As System-on-Chip (SoC) architectures incorporate billions of transistors, the ability to accurately predict design properties has become paramount 5. Early-stage architectural design and physical synthesis rely heavily on robust models that quantify the relationship between logic complexity and the communication requirements between disparate system blocks.
The foundational model in this domain is Rent's Rule 1. Discovered empirically by E. F. Rent at IBM and later formalized by Landman and Russo 2, the rule establishes a power-law relationship between the number of external signal connections (terminals) to a logic block and the number of internal components (gates or standard cells) it contains:
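$$T = K \cdot g^{\,p}$$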
Where:
• T: number of external terminals (pins) of the block.
• g: number of internal logic components (gates/cells).
• K: Rent's empirical constant (average pins per block).
• p: Rent exponent (0 < p < 1).
While Rent's Rule is an indispensable tool for wirelength estimation 3,4 and placement optimization, its empirical origins lead to inherent limitations, especially when applied to modern, heterogeneous architectures. This paper discusses the New Law 5 and its generalization, which address these shortcomings by incorporating explicit structural constraints, extending the rule's utility to the next generation of complex computing systems.
________________________________________
2. Overview of Rent's Rule and Current Drawbacks
2.1. Applications and Interpretation
Rent's Rule describes a statistical self-similarity in digital systems. The Rent exponent (p) provides insight into a design's topological complexity:
• p≈0.4: highly regular structures.
• p≈0.5: structured designs with high locality (e.g., SRAM).
• p≈0.75: "random logic" or complex, unstructured designs.
The power-law form suffers from two primary drawbacks 6,8:
1. Terminal Constraint Deviation (Region II) 7: the power law breaks down as partitions approach the total system size (>25% of the chip). Physical I/O pins are finite; thus, the log-log plot flattens as g approaches N.
2. Undefined Constants: there is no established methodology relating design metrics to the empirical constants K and p.
________________________________________ 3. The New Rule: Generalization for Autonomic Systems
We utilized a graph-mathematical model to generalize Rent’s Rule, specifically addressing its limitations when applied to autonomic systems. We demonstrated that the classical power-law form of Rent’s Rule is valid only under the restrictive conditions where the system contains a large number of blocks, and the number g of internal components in a block is much smaller than the total number of components (N) in the entire system 9.
The generalized formulation, referred to as the New Graph-based Rule, extends the applicability of the scaling law across the entire range of partition sizes, including the problematic Rent's Region II. The New Rule is expressed as 9,10,11:
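A closed form consistent with the derivation and boundary cases given below (the normalization by N−1 rather than N is an assumption) is:

$$T = t\left[\frac{g\,(N-g)}{N-1}\right]^{P_g}$$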
Where:
• T is the number of external terminals for the block partition.
• N is the total number of components in the system.
• g is the number of components in the block partition.
• t represents the average number of pins of a component in the system.
• Pg is the generalized Rent exponent, derived by the described graph-partitioning method.
The rule was derived by modeling the system as a graph, where each component is represented as a vertex, and each net is represented as a tree connecting its components.
Figure 1. "All Net Components Are in the Block" illustrates the case when a net connects three components (A, B, and C) and is represented as a net tree. In this example, all net components are in the same block; thus, there is no need for a block external terminal—none of the net edges exit the block.
Figure 2. "An external terminal" illustrates the same case, but only components A and B are in the same block, while component C is located in another block. In this scenario, an edge exits the block to connect to component C, necessitating a block external terminal for the net under consideration.
Initially, we assumed that each block has randomly assigned components. Under this assumption, the probability Q′ that a given pin of a given component has an edge to another component outside of the block is:
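If the other endpoint of the edge is equally likely to be any of the remaining N−1 components, of which N−g lie outside the block, then:

$$Q' = \frac{N-g}{N-1}$$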
If the net has only two components to connect (the net tree is a single edge), the above formula is straightforward. In this case, the edge goes outside the block, creating one block terminal. If the net has m>2 pins to connect, we still have only one outside terminal—all components of the net within the block are connected by an internal net-tree, requiring only one tree edge to exit the block.
Because the component under consideration has t pins on average, the probability Q that the component will have t edges (block terminals) to components in other blocks is:
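Reading this as the expected number of the component's pin connections that leave the block, and treating the t pins as independent, gives (the symbol T_random for the resulting block-level estimate is introduced here for clarity, and is presumably the formula [2] referenced below):

$$Q = t\,Q' = \frac{t\,(N-g)}{N-1}, \qquad T_{\text{random}} \approx g\,Q = \frac{t\,g\,(N-g)}{N-1}$$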
The drawback of formula [2] is the assumption of random component assignment. In reality, blocks are not designed randomly; highly connected components are partitioned into the same block to minimize communication overhead. Therefore, formula [2] produces conservative results. To account for the effect of optimized partitioning that minimizes terminals, we introduce a correction constant Pg<1 (analogous to the Rent exponent), which reduces the estimated number of terminals and yields the New Rule stated above. Its boundary behavior is as follows:
• Case 1 (g=1): simplifies to T=t, matching classical expectations.
• Case 2 (g=N/2): yields the maximum terminal count, reflecting the peak communication requirement when a system is halved.
• Case 3 (g=N): T=0. This accurately models Region II, as a closed system has no external signals.
Above, we utilized a graph-mathematical model to generalize Rent’s Rule. We will show that if we use a hypergraph model of the system, we can further improve the accuracy of the generalized Rent’s Rule by taking into account an additional and known design property: the average number of components, m, that a net connects.
Let’s represent a net that connects m pins as a hyperedge, instead of a tree as used in the previous graph-based model. Note that m is a known design property and is the average value that can be obtained for any real design.
Figure 3. "All three components and the hyperedge are within the Block" illustrates the case when a net connects three components (A, B, and C) and is represented as a hyperedge (an oval encompassing all components). In this example, all net components are in the same block, and there is no need for a block external terminal—the hyperedge does not cross the block boundary.
Again, let’s initially assume that each block has randomly assigned components. Then, the probability V′′ that a given pin of a given component within the block is connected to another component within that same block is:
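Under the same uniform random-assignment assumption, the other end of the connection is any of the remaining N−1 components, of which g−1 lie inside the block:

$$V'' = \frac{g-1}{N-1}$$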
The probability V′ that the remaining m−1 vertices (components) within the hyperedge are all located in the block (resulting in no block terminal for this net) is:
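Treating the placements of the remaining m−1 components as independent (an approximating assumption; drawing without replacement would give a product of slightly decreasing factors):

$$V' = (V'')^{\,m-1} = \left(\frac{g-1}{N-1}\right)^{m-1}$$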
Because the component under consideration has t pins on average, the probability Q that the component will have t hyperedges (block terminals) connecting to components in other blocks is:
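One consistent reading: each of the component's t hyperedges leaves the block with probability 1−V′, so the expected number of exiting hyperedges (block terminals contributed by this component) is:

$$Q = t\,(1-V') = t\left[1-\left(\frac{g-1}{N-1}\right)^{m-1}\right]$$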
The above formula reflects the physical reality that the more components of a net are located within the block, the lower the probability that the net will exit the block. If all m components of a net are in the block, the net requires no block terminal. With g components in the block, the number of expected block terminals is:
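Summing this expectation over the g components of the block gives a first-order estimate (the symbol T_random is again used only for clarity, and is presumably the formula [3] referenced below):

$$T_{\text{random}} \approx g\,t\left[1-\left(\frac{g-1}{N-1}\right)^{m-1}\right]$$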
Again, the drawback of formula [3] is the assumption of random component assignment. In reality, highly connected components are partitioned together to minimize external terminals. Thus, formula [3] produces conservative results. To account for optimized partitioning, we introduce a correction constant Ph<1 (similar to Pg) to reduce the estimated number of terminals:
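By analogy with the graph-based correction, one form consistent with this description and with the limiting cases T=t at g=1 and T=0 at g=N is:

$$T = t\left\{g\left[1-\left(\frac{g-1}{N-1}\right)^{m-1}\right]\right\}^{P_h}$$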
The following final points support the justification of the new rules:
• Experimental Alignment: they provide a superior match to experimental data across all regions.
• Convergence: terminal counts are close to Rent’s predictions when g is small.
• Structural Commonality: there is a fundamental commonality in the rule structures; they can be effectively approximated by Rent’s Rule for very small g.
The proposed New Rules resolve long-standing issues in VLSI modeling by explicitly incorporating N (system size), t (average pins), and m (net fan-out). By naturally constraining terminal counts at g=N, these rules provide a mathematically sound bridge across both Region I and Region II of Rent's curve.
________________________________________
References
1. Rent, E.F. (1960): Original discovery (an internal IBM memorandum).
2. Landman, B.S. and Russo, R.L. (1971): "On a Pin Versus Block Relationship for Partitions of Logic Graphs," IEEE Transactions on Computers, vol. C-20, no. 12, pp. 1469-1479.
4. Heller, W.R., Hsi, C. and Mikhail, W.F. (1978): "Chip-Level Physical Design: An Overview," IEEE Transactions on Electron Devices, vol. 25, no. 2, pp. 163-176.
6. Sutherland, I.E. and Oosterhout, W.J. (2001): "The Futures of Design: Interconnections," ACM/IEEE Design Automation Conference (DAC), pp. 15-20.
7. Davis, J. A. and Meindl, J. D. (2000): "A Hierarchical Interconnect Model for Deep Submicron Integrated Circuits," IEEE Transactions on Electron Devices, vol. 47, no. 11, pp. 2068-2073.
8. Stroobandt, D. A. and Van Campenhout, J. (2000): "The Geometry of VLSI Interconnect," Proceedings of the IEEE, vol. 88, no. 4, pp. 535-546.
9. Tetelbaum, A. (1995): "Generalizations of Rent's Rule," in Proc. of the 27th IEEE Southeastern Symposium on System Theory, Starkville, Mississippi, USA, March 1995, pp. 11-16.
10. Tetelbaum, A. (1995): "Estimations of Layout Parameters of Hierarchical Systems," in Proc. of the 27th IEEE Southeastern Symposium on System Theory, Starkville, Mississippi, USA, March 1995, pp. 123-128.
11. Tetelbaum, A. (1995): "Estimation of the Graph Partitioning for a Hierarchical System," in Proc. of the Seventh SIAM Conference on Parallel Processing for Scientific Computing, San Francisco, California, USA, February 1995, pp. 500-502.
________________________________________
The divergence between executive compensation and median employee wages has reached historic levels, yet current methods for determining "fair" pay often rely on peer benchmarking and market heuristics rather than structural logic. This paper proposes a new mathematical framework for determining the CEO-to-Employee Pay Ratio (Rceo) based on the internal architecture of the corporation. By integrating the Pareto Principle with organizational hierarchy theory, we derive a scalable model that calculates executive impact as a function of the company's size, span of control, and number of management levels.
Our results demonstrate that a scientifically grounded approach can justify executive compensation across a wide range of organization sizes—from startups to multinational firms—while providing a defensible upper bound that aligns with organizational productivity. Comparison with empirical data from the Bureau of Labor Statistics (BLS) suggests that this model provides a robust baseline for boards of directors and regulatory bodies seeking transparent and equitable compensation standards.
The compensation of Chief Executive Officers (CEOs) has evolved from a matter of private contract into a significant issue of public policy and corporate ethics. Over the past four decades, the ratio of CEO-to-typical-worker pay has swelled from approximately 20-to-1 in 1965 to over 300-to-1 in recent years 1.
Developing a "fair" compensation model is not merely a question of capping wealth, but of aligning the interests of the executive with those of the shareholders, employees, and the broader society. Asmanagement legend Peter Drucker famously noted: "I have over the years come to the conclusion that (aratio) of 20-to-1 is the limit beyond which it is very difficult to maintain employee morale and asense of common purpose." 2
________________________________________ 2. Overview of Existing Works and Theories
The academic literature on CEO compensation generally falls into three primary schools of thought: Agency Theory 3, Managerial Power Hypothesis 4, and Social Comparison Theory 5. While these provide qualitative insights, they often lack a predictive mathematical engine that accounts for the physical size and complexity of the firm.
________________________________________ 3. Principles and Assumptions
We propose a framework for estimating the CEO-to-Employee Pay Ratio (Rceo) based on five realistic and verifiable assumptions:
Assumption 1: The Pareto Principle. We utilize the 80/20 rule, assuming that the top 20% of a leadership hierarchy is responsible for 80% of strategic results 6.
Assumption 2: Span of Control. The model incorporates the total number of employees (N), hierarchical levels (K), and the average number of direct reports (D), benchmarked at D=10 7; a sketch relating N, D, and K follows the assumptions.
Assumption 3: Productivity Benchmarking. The average worker's productivity (P) is set to 1 to establish a baseline for relative scaling.
Assumption 4: Hierarchical Scaling. Strategic impact increases as one moves up the organizational levels, but at a decaying rate of intensity (H).
Assumption 5: Occam’s Razor. We prioritize the simplest mathematical explanation that fits the observed wage data 8.
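As a brief illustration of how Assumptions 2 and 4 interact (a standard hierarchy-size relation offered for orientation, not a formula taken from the model itself): a hierarchy with span of control D and K management levels below the CEO contains roughly

$$N \approx 1 + D + D^{2} + \dots + D^{K} = \frac{D^{K+1}-1}{D-1} \quad\Longrightarrow\quad K \approx \log_{D} N,$$

so with D=10, a firm of about 100,000 employees implies roughly K≈5 levels.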
________________________________________ 4. The CEO-to-Employee Pay Ratio (Rceo)
The fair compensation of a CEO (Sceo) is expressed as:
________________________________________ 5. Model Discussion
To validate the model, we compared our theoretical Rceo against Bureau of Labor Statistics (BLS) data groups 9. Using values of D=10 and H=0.7, the model tracks the reported ratios of mid-to-large-cap companies with high accuracy.
Notes: This table compares empirical (reported) CEO-to-employee pay ratios from large public firms against modeled estimates (Model Rceo), which adjust for factors like company size, industry, and equity components. Data is illustrative, based on 2024–2025 benchmarks; actual ratios vary widely.
Special cases like Tesla (2024) demonstrate that while traditional hierarchy explains baseline pay, performance-based stock options can create extreme outliers reaching ratios of 40,000:1 10.
This paper has introduced a consistent and scientifically grounded framework for determining CEO compensation. By shifting the focus from "market guessing" to hierarchical productivity scaling, we provide a transparent justification for executive pay. As an additional feature, the upper bounds of managerial remuneration at all hierarchical levels can be identified across corporations of any size.
The strength of this model is its mathematical consistency across all scales of enterprise. While determining the exact hierarchical decay constant (H) remains an area for further empirical refinement, the framework itself provides a logical and defensible constraint on executive compensation, ensuring alignment between leadership rewards and structural organizational impact.
1. Mishel, L. and Kandra, J. (2021). "CEO pay has skyrocketed 1,322% since 1978," EPI.
2. Drucker, P. F. (1984). "The Changed World Economy," Foreign Affairs.
3. Jensen, M. C. and Meckling, W. H. (1976). "Theory of the firm," J. Finan. Econ.
4. Bebchuk, L. A. and Fried, J. M. (2004). Pay Without Performance. Harvard University Press.
5. Adams, J. S. (1963). "Towards an understanding of inequity," J. Abnorm. Soc. Psych.
6. Koch, R. (1998). The 80/20 Principle. Currency.
7. Gurbuz, S. (2021). "Span of Control," Palgrave Encyclopedia.
8. Baker, A. (2007). "Occam's Razor," Stanford Encyclopedia.
9. BLS (2024). "Occupational Employment and Wage Statistics," U.S. Dept. of Labor.
10. Hull, B. (2024). "Tesla's Musk pay package analysis," Reuters.
________________________________________
The question of how many humans have ever lived is more than a matter of historical curiosity; it is a fundamental demographic metric that informs our understanding of human evolution, resource consumption, and the long-term impact of our species on the planet 1. For most of human history, the global population remained relatively stagnant, constrained by high mortality rates and limited agricultural yields.
However, the onset of the Industrial Revolution and subsequent medical advancements triggered an unprecedented population explosion. This rapid growth has led to a common misconception: that the number of people alive today rivals or even exceeds the total number of people who have ever died 2. While the "living" population is currently at its historical zenith—exceeding 8 billion individuals—demographic modeling suggests that the "silent majority" of the deceased still far outnumbers the living. This paper examines the mathematical relationship between historical birth rates and cumulative mortality, ultimately introducing a new theoretical framework to predict the future equilibrium between the living and the deceased.
________________________________________ 2. Overview of Existing Models and Estimates
The most widely cited estimate comes from the Population Reference Bureau (PRB) 1. Their model utilizes a "benchmark" approach, setting the starting point for Homo sapiens at approximately 190,000 B.C.E. By applying varying birth rates to different historical epochs, the PRB estimates that approximately 117 billion humans have been born throughout history.
• Total Deceased: approximately 109 billion.
• Total Living: approximately 8.1 billion.
• The Ratio: this suggests that for every person alive today, there are approximately 13 to 14 people who have died 1.
2.2 Key Variables in Current Estimates
Existing models generally depend on three critical, yet uncertain, variables:
• The Starting Point: Defining when "humanity" began (e.g., 50,000 vs. 200,000 years ago) significantly alters the cumulative count, though the lower populations of early history mean this has a smaller impact than one might expect 2.
• Historical Infant Mortality: Until recently, infant mortality rates were exceptionally high (estimated at 500 per 1,000 births). Because these individuals died before reproducing, they contribute heavily to the "deceased" count without contributing to the "living" population of the subsequent generation 3.
• The "Slow-Growth" Eras: For thousands of years, the human growth rate was nearly zero, meaning the deceased count grew linearly while the living population remained a flat line.
• Homogeneity Assumption: Most models apply a single birth rate to a large epoch, ignoring regional spikes or collapses, such as the Americas post-1492 4.
• Data Scarcity: Pre-1650 data is almost entirely speculative, based on carrying-capacity estimates of the land rather than actual headcounts 2.
• Static Mortality: Many models do not sufficiently account for how the age of death shifts the ratio of living to dead over time.
________________________________________
3. Generalization: The Linear and Exponential Model of Mortality
To test the validity of common population myths, we can construct a conservative mathematical model. Let Nlive(y) represent the living population at year y, and Ndead(y) represent the cumulative deceased population.
While modern mortality rates are low (e.g., 0.8% in 2012), historical rates were significantly higher. Using a conservative estimate of Rmort(BCE)=2.0%, the average annual deaths are:
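Writing $\bar{N}_{\mathrm{BCE}}$ for the average BCE population (a symbol introduced here for clarity), the expected annual deaths are:

$$D_{\text{avg}} \approx R_{\text{mort(BCE)}} \cdot \bar{N}_{\mathrm{BCE}} = 0.02\,\bar{N}_{\mathrm{BCE}}$$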
3.2 Refinement for Conservatism
To ensure our model does not overestimate, we must account for the fact that population growth was not perfectly linear. If the "real" population curve (the green line in our model) stays below the linear trajectory, the area A1 represents an overestimation. To correct for this, we reduce the slope A of our model by half to ensure we are underestimating the dead. This yields a revised average BCE population:
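As a hedged worked example (the 200-million and 10,000-year figures are illustrative assumptions): if the population grew roughly linearly from near zero to about 200 million at year 0 over the preceding 10,000 years, the linear average is about 100 million, and halving it for conservatism gives roughly 50 million. The cumulative BCE deaths are then

$$N_{\text{dead}} \approx 0.02 \times (5\times 10^{7}) \times 10^{4} = 10^{10} \approx 10\ \text{billion},$$

which is consistent with the 10-billion estimate used below.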
Even under this strictly conservative 10-billion estimate, the deceased population remains higher than the current living population (7.9 billion). Conclusion 2: Starting around 9950 BCE, the cumulative number of deceased individuals has consistently remained higher than the number of living individuals.
________________________________________
4. Modern Era and Future Predictions
For the period from 0 CE to 2022 CE, the population is better represented by an exponential model:
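Consistent with the constants quoted below, and interpreting the result in billions of people (an assumption):

$$N_{\text{live}}(y) = C\,e^{K\,y}$$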
Where C=0.000000005 and K=0.01046. Applying a modern mortality rate of 0.9%, we can track the "Live World" vs. the "Dead World." Note that you can find useful graphs and illustrations in my book 5 that discuss tough problems, including this one.
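A minimal numerical sketch of this model (assuming, as above, that the formula returns billions of people; the constants are those quoted in the text):

```python
import math

# Exponential model quoted in the text: N_live(y) = C * exp(K * y),
# interpreted here as returning billions of people (an assumption).
C = 0.000000005   # 5e-9
K = 0.01046

def n_live_billions(year: int) -> float:
    """Modeled living population, in billions, at the given calendar year."""
    return C * math.exp(K * year)

if __name__ == "__main__":
    for year in (1900, 1950, 2000, 2022):
        print(f"{year}: ~{n_live_billions(year):.2f} billion")
```

With these constants the model returns roughly 7.7 billion for 2022, of the same order as the living-population figures used in the text.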
4.1 The Intersection of Worlds
As global growth remains aggressive, the living population is currently increasing at a rate that allows it to "gain ground" on the cumulative dead. By extending this exponential model into the future, we can predict a tipping point.
Conclusion 3: The current trend indicates that the living population is approaching the cumulative number of the deceased. Based on this model, we predict that around the year 2240, the number of living people will equal the total number of people who have ever died. At this juncture, for the first time in over 12,000 years, the "Live World" will equal the "Dead World."
________________________________________
5. References
1. Kaneda, T. and Haub, C. (2021). "How Many People Have Ever Lived on Earth?" Population Reference Bureau (PRB).
2. Westing, A. H. (1981). "A Note on How Many People Have Ever Lived," BioScience, vol. 31, no. 7, pp. 523-524.
3. Keyfitz, N. (1966). "How Many People Have Lived on the Earth?" Demography, vol. 3, no. 2, pp. 581-582.
4. Whitmore, T. M. (1991). "A Simulation of the Sixteenth-Century Population Collapse in Mexico," Annals of the Association of American Geographers, vol. 81, no. 3, pp. 464-487.
5. Tetelbaum, A. Solving Non-Standard Very Hard Problems. Amazon Books.
________________________________________
The relentless progress in integrated circuit density, governed for decades by the principles of Moore’s Law, has shifted the bottleneck of system design from transistor speed to interconnection complexity. As System-on-Chip (SoC) and massively parallel architectures incorporate billions of transistors, the ability to accurately predict and manage the wiring demands, power consumption, and physical area of a design has become paramount 5. Early-stage architectural exploration and physical synthesis rely heavily on robust models that quantify the relationship between logic complexity and communication requirements.
The foundational model in this domain is Rent's Rule 1. Discovered empirically by E. F. Rent at IBM in the 1960s, and later formalized by Landman and Russo 2, the rule establishes a fundamental power-law relationship between the number of external signal connections (terminals) to a logic block and the number of internal components (gates or standard cells) it contains. Mathematically, the rule is expressed as:
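$$T = K \cdot g^{\,p}$$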
Where: T is the number of external terminals (pins/connections); g is the number of internal logic components (gates/blocks); K is Rent's constant; and p is the Rent exponent.
While Rent's Rule has served as an indispensable tool for wirelength estimation 3,4, placement optimization, and technology prediction, its empirical origins and inherent limitations—especially when applied to modern, highly heterogeneous architectures—necessitate a generalized framework. This paper introduces the New Rule (Tetelbaum's Law), which addresses its primary shortcomings by incorporating explicit structural constraints, thereby extending its utility to the next generation of complex computing systems.
________________________________________ 2. Overview of Rent's Rule and Current Drawbacks 2.1. Current Results and Applications
Rent's Rule describes a statistical self-similarity in the organization of complex digital systems, implying that a circuit partitioned at any level of the hierarchy exhibits the same power-law relationship between pins and gates.
The Rent exponent, p, is the central characteristic of the rule and provides immediate insight into a design's topological complexity: p≈0 corresponds to highly regular structures; p≈0.5 is typical of structured designs with high locality (e.g., memory); and p≈0.75 is characteristic of "random logic" or complex, unstructured designs.
The rule’s primary utility lies in its application to interconnect prediction:
1. Wire Length Estimation: Donath and others demonstrated that the Rent exponent is directly correlated with the average wirelength and its distribution 3. A lower p value implies greater locality and shorter expected wirelengths, which is crucial for power and timing analysis.
2. A Priori System Planning: By estimating the Rent exponent early in the design flow, architects can predict necessary routing resources, estimate power dissipation due to interconnects, and evaluate the feasibility of a physical partition before detailed placement and routing 5.
Despite its foundational role, the power-law form of Rent's Rule suffers from several well-documented drawbacks that limit its accuracy and domain of applicability in advanced systems 6,8:
1. Terminal Constraint Deviation (Region II): The most significant limitation is the breakdown of the power law for partitions encompassing a very large number of components (i.e., when approaching the size of the entire chip). Since a physical chip has a finite number of peripheral I/O pins, the actual terminal count for the largest partition is physically constrained and ceases to follow the predicted power-law trend. This phenomenon is known as Rent's Region II, where the log-log plot flattens 7. This deviation is critical for packaging and system-level planning.
2. Small Partition Deviation (Region III): A deviation also occurs for very small partitions. This Rent's Region III, often attributed to local wiring effects and the intrinsic definition of the base logic cell, suggests the power-law assumption is inaccurate at the lowest hierarchical levels 7.
3. Assumption of Homogeneity: The theoretical underpinnings of Rent's Rule often assume a statistically homogeneous circuit topology and placement. Modern System-on-Chip (SoC) designs are fundamentally heterogeneous, consisting of diverse functional blocks (e.g., CPU cores, memory controllers, accelerators). Each sub-block exhibits a distinct intrinsic Rent exponent, rendering a single, global Rent parameter insufficient for accurate modeling 8.
4. Inaccuracy for Non-Traditional Architectures: As an empirical model based on traditional VLSI, Rent's Rule is less applicable to highly specialized or non-traditional structures, such as advanced 3D integrated circuits (3D-ICs) or neuromorphic systems, where the physical communication graph significantly deviates from planar assumptions.
These limitations demonstrate a pressing need for a generalized Rent's Rule framework capable of modeling non-uniform locality, structural hierarchy, and physical I/O constraints.
________________________________________ 3. The New Rule: Generalization for Autonomic Systems
Dr. Alexander Tetelbaum utilized a graph-mathematical model to generalize Rent’s Rule, specifically addressing its limitations when applied to autonomic systems. His work demonstrated that the classical power-law form of Rent’s Rule is valid only under the restrictive conditions where the system contains a large number of blocks (designs), and the number g of internal components in a block is much smaller than the total number of components (N) in the entire system 9.
The generalized formulation, referred to as the New Rule (or Tetelbaum's Law), extends the applicability of the scaling law across the entire range of partition sizes, including the problematic Rent's Region II. The New Rule is expressed as 9,10,11:
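A closed form consistent with the boundary cases listed below (the normalization by N−1 rather than N is an assumption) is:

$$T = t\left[\frac{g\,(N-g)}{N-1}\right]^{P}$$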
Where: T is the number of external terminals for the block partition; N is the total number of components in the system; g is the number of components in the block partition; t represents the average number of pins of a component in the system; and P is the generalized Rent exponent, derived by the described graph-partitioning method.
Key Behavioral Cases
The following boundary conditions illustrate the behavior of the New Rule, confirming its consistency with physical constraints and highlighting the overestimation inherent in the classical formulation; a short numerical sketch follows the list:
• Case 1: Single Component (g=1). When a block contains a single component, the New Rule simplifies to T=t, which is identical to the behavior of Rent’s Rule.
• Case 2: Maximum Partition (g=N/2). When the system is divided exactly in half, the New Rule yields the maximum terminal count. By contrast, the classical Rent’s Rule, T = K⋅g^p, continues to increase as g increases, leading to significant overestimation for large g.
• Case 3: Full System (g=N). When the block contains all system components, N−g=0, resulting in T=0. This accurately reflects the physical reality that the entire system (if autonomic) has no external signal terminals, thereby explicitly modeling the crucial Rent's Region II terminal constraint deviation 7.
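To make this behavior concrete, the short sketch below compares the classical power law with a generalized form consistent with the three boundary cases above. The functional form and every parameter value (K, p, t, P, N) are assumptions chosen for illustration, not values taken from the referenced experiments.

```python
# Illustrative comparison (not taken from the paper): classical Rent's Rule versus
# a generalized form consistent with the three boundary cases above,
#     T_new(g) = t * (g * (N - g) / (N - 1)) ** P
# All parameter values (K, p, t, P, N) are assumed for illustration only.

def rent_classical(g: int, K: float = 3.0, p: float = 0.6) -> float:
    """Classical Rent's Rule: T = K * g**p."""
    return K * g ** p

def rent_generalized(g: int, N: int, t: float = 3.0, P: float = 0.6) -> float:
    """Assumed generalized form: T = t * (g*(N-g)/(N-1))**P (zero at g = N)."""
    return t * (g * (N - g) / (N - 1)) ** P

if __name__ == "__main__":
    N = 10_000
    for g in (1, 10, 100, N // 2, N - 10, N):
        print(f"g={g:>6}  classical T={rent_classical(g):10.1f}  "
              f"generalized T={rent_generalized(g, N):10.1f}")
```

For small g the two estimates nearly coincide, while near g=N the classical law keeps growing and the generalized form falls to zero, which is the Region II behavior discussed above.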
Advantages of the New Rule
The New Rule provides several key advantages that address the limitations of the classical power law:
• Full-Range Analysis: It permits the accurate analysis of system blocks containing an arbitrary number of components.
• Improved Accuracy: Comparisons between theoretical predictions and empirical data from 28 complex electronic systems demonstrated that terminal count estimations using the New Rule are approximately 4.7% more accurate than those obtained with Rent’s Rule.
• Physical Derivation: The constants t and P can be derived directly from the properties of actual designs and systems.
• Interconnection Estimation: The New Rule enables the accurate estimation of interconnection length distribution for design optimization.
The complexity of modern electronic systems necessitates robust, predictive models for interconnect planning and resource allocation. Rent's Rule has served as a cornerstone for this task, offering a simple yet powerful power-law framework for relating logic complexity to communication demand. However, the rule's inherent empirical limitations—specifically the breakdown at system-level constraints (Region II) and its inaccuracy for heterogeneous architectures—render it increasingly insufficient for the challenges of advanced VLSI and system design.
The proposed New Rule (Tetelbaum's Law) 9 represents a critical generalization that resolves these long-standing issues. By explicitly incorporating the total number of system components (N) into the formulation, the New Rule accurately models the terminal count across the entire spectrum of partition sizes. Its mathematical form naturally constrains the terminal count to zero when the partition equals the system size (g=N), perfectly capturing the physical I/O constraints that define Rent's Region II. Furthermore, the proven 4.7% accuracy improvement over the classical model confirms its superior predictive capability.
This generalized framework allows architects to perform more reliable, full-system interconnect planning a priori. Future work will focus on extending the New Rule to explicitly model non-uniform locality within heterogeneous SoCs, and applying it to non-traditional geometries, such as 3D integrated circuits, where the concept of locality must be defined across multiple physical layers.
1. Rent, E.F. (1960): Original discovery (an internal IBM memorandum).
2. Landman, B.S. and Russo, R.L. (1971): "On a Pin Versus Block Relationship for Partitions of Logic Graphs," IEEE Transactions on Computers, vol. C-20, no. 12, pp. 1469-1479.
4. Heller, W.R., Hsi, C. and Mikhail, W.F. (1978): "Chip-Level Physical Design: An Overview," IEEE Transactions on Electron Devices, vol. 25, no. 2, pp. 163-176.
5. Bakoglu, H.B. (1990): Circuits, Interconnections, and Packaging for VLSI. Addison-Wesley.
6. Sutherland, I.E. and Oosterhout, W.J. (2001): "The Futures of Design: Interconnections," ACM/IEEE Design Automation Conference (DAC), pp. 15-20.
7. Davis, J. A. and Meindl, J. D. (2000): "A Hierarchical Interconnect Model for Deep Submicron Integrated Circuits," IEEE Transactions on Electron Devices, vol. 47, no. 11, pp. 2068-2073.
8. Stroobandt, D. A. and Van Campenhout, J. (2000): "The Geometry of VLSI Interconnect," Proceedings of the IEEE, vol. 88, no. 4, pp. 535-546.
9. Tetelbaum, A. (1995): "Generalizations of Rent's Rule," in Proc. of the 27th IEEE Southeastern Symposium on System Theory, Starkville, Mississippi, USA, March 1995, pp. 11-16.
10. Tetelbaum, A. (1995): "Estimations of Layout Parameters of Hierarchical Systems," in Proc. of the 27th IEEE Southeastern Symposium on System Theory, Starkville, Mississippi, USA, March 1995, pp. 123-128.
11. Tetelbaum, A. (1995): "Estimation of the Graph Partitioning for a Hierarchical System," in Proc. of the Seventh SIAM Conference on Parallel Processing for Scientific Computing, San Francisco, California, USA, February 1995, pp. 500-502.
Welcome to the OurBigBook Project! Our goal is to create the perfect publishing platform for STEM subjects, and get university-level students to write the best free STEM tutorials ever.
Everyone is welcome to create an account and play with the site: ourbigbook.com/go/register. We believe that students themselves can write amazing tutorials, but teachers are welcome too. You can write about anything you want; it doesn't have to be STEM or even educational. Silly test content is very welcome and you won't be penalized in any way. Just keep it legal!
a Q&A website like Stack Overflow, where multiple people can give their views on a given topic, and the best ones are sorted by upvote. Except you don't need to wait for someone to ask first, and any topic goes, no matter how narrow or broad
This feature makes it possible for readers to find better explanations of any topic created by other writers. And it allows writers to create an explanation in a place that readers might actually find it.
This way you can be sure that even if OurBigBook.com were to go down one day (which we have no plans to do, as it is quite cheap to host!), your content will still be perfectly readable as a static site.