As System-on-Chip (SoC) architectures incorporate billions of transistors, the ability to accurately predict design properties has become paramount 5. Early-stage architectural design and physical synthesis rely heavily on robust models that quantify the relationship between logic complexity and the communication requirements between disparate system blocks.
The foundational model in this domain is Rent's Rule 1. Discovered empirically by E. F. Rent at IBM and later formalized by Landman and Russo 2, the rule establishes a power-law relationship between the number of external signal connections (terminals) to a logic block and the number of internal components (gates or standard cells) it contains:
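In its standard notation (all symbols are defined in the list that follows):

$$T = K \cdot g^{\,p}$$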
Where:
• T: Number of external terminals (pins) of the block.
• g: Number of internal logic components (gates/cells).
• K: Rent's empirical constant (average pins per block).
• p: Rent exponent (0 < p < 1).
While Rent's Rule is an indispensable tool for wirelength estimation 3,4 and placement optimization, its empirical origins lead to inherent limitations, especially when applied to modern, heterogeneous architectures. This paper discusses the New Rule 5 and a further generalization that address these shortcomings by incorporating explicit structural constraints, extending the rule's utility to the next generation of complex computing systems.
________________________________________
2. Overview of Rent's Rule and Current Drawbacks
2.1. Applications and Interpretation
Rent's Rule describes a statistical self-similarity in digital systems. The Rent exponent (p) provides insight into a design's topological complexity:
• p ≈ 0.4: Highly regular structures.
• p ≈ 0.5: Structured designs with high locality (e.g., SRAM).
• p ≈ 0.75: "Random logic" or complex, unstructured designs.
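As a brief worked example (the constants here are chosen purely for illustration), consider a block of g = 10,000 cells. For "random logic" with K = 4 and p = 0.75,

$$T = 4 \cdot (10^{4})^{0.75} = 4 \cdot 10^{3} = 4000 \text{ terminals},$$

whereas a highly structured block with the same K and g but p = 0.5 would require only $4 \cdot 10^{2} = 400$ terminals.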
The power-law form suffers from two primary drawbacks 6,8:
1. Terminal Constraint Deviation (Region II) 7: The power law breaks down as partitions approach the total system size (>25% of the chip). Physical I/O pins are finite; thus, the log-log plot flattens as g approaches N.
2. Undefined Constants: There is no established methodology for relating the empirical constants K and p to measurable design metrics.
________________________________________
3. The New Rule: Generalization for Autonomic Systems
We utilized a graph-mathematical model to generalize Rent’s Rule, specifically addressing its limitations when applied to autonomic systems. We demonstrated that the classical power-law form of Rent’s Rule is valid only under the restrictive conditions where the system contains a large number of blocks, and the number g of internal components in a block is much smaller than the total number of components (N) in the entire system 9.
The generalized formulation, referred to as the New Graph-based Rule, extends the applicability of the scaling law across the entire range of partition sizes, including the problematic Rent's Region II. The New Rule is expressed as 9,10,11:
Where:
• T is the number of external terminals for the block partition.
• N is the total number of components in the system.
• g is the number of components in the block partition.
• t represents the average number of pins of a component in the system.
• Pg is the generalized Rent exponent, derived by the described graph-partitioning method.
The rule was derived by modeling the system as a graph, where each component is represented as a vertex, and each net is represented as a tree connecting its components.
Figure 1. "All Net Components Are in the Block" illustrates the case when a net connects three components (A, B, and C) and is represented as a net tree. In this example, all net components are in the same block; thus, there is no need for a block external terminal: none of the net edges exit the block.
Figure 2. "An external terminal" illustrates the same case, but only components A and B are in the same block, while component C is located in another block. In this scenario, an edge exits the block to connect to component C, necessitating a block external terminal for the net under consideration.
Initially, we assumed that each block has randomly assigned components. Under this assumption, the probability Q′ that a given pin of a given component has an edge to another component outside of the block is:
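As a sketch of this quantity under the stated random-assignment assumption (and assuming the pin's edge leads, uniformly at random, to one of the other N − 1 components of the system), one natural expression is

$$Q' = \frac{N - g}{N - 1},$$

since N − g of the N − 1 candidate endpoints lie outside a block containing g components.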
If the net has only two components to connect (the net tree is a single edge), the above formula is straightforward. In this case, the edge goes outside the block, creating one block terminal. If the net has m>2 pins to connect, we still have only one outside terminal—all components of the net within the block are connected by an internal net-tree, requiring only one tree edge to exit the block.
Because the component under consideration has t pins on average, the probability Q that the component will have t edges (block terminals) to components in other blocks is:
The drawback of formula [2] is the assumption of random component assignment. In reality, blocks are not designed randomly; highly connected components are partitioned into the same block to minimize communication overhead. Therefore, formula [2] produces conservative results. To account for the effect of optimized partitioning that minimizes terminals, we introduce a correction constant Pg<1 (analogous to the Rent exponent), which reduces the estimated number of terminals:
• Case 1 (g = 1): Simplifies to T = t, matching classical expectations.
• Case 2 (g = N/2): Yields the maximum terminal count, reflecting the peak communication requirement when a system is halved.
• Case 3 (g = N): T = 0. This accurately models Region II, as a closed system has no external signals.
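As a consistency sketch (this is a reconstruction based on the random-assignment argument above, not necessarily the exact published form of the New Rule), the simplest uncorrected estimate of the expected terminal count,

$$T_{\text{random}}(g) = t \cdot g \cdot \frac{N - g}{N - 1},$$

already reproduces all three limiting behaviors: $T_{\text{random}}(1) = t$, the maximum occurs at $g = N/2$, and $T_{\text{random}}(N) = 0$. The correction constant Pg < 1 described above then reduces this conservative estimate to reflect optimized partitioning.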
Above, we utilized a graph-mathematical model to generalize Rent's Rule. We will show that if we use a hypergraph model of the system, we can further improve the accuracy of the generalized Rent's Rule by taking into account an additional and known design property: the average number of components, m, that a net connects.
Let's represent a net that connects m pins as a hyperedge, instead of a tree as used in the previous graph-based model. Note that m is a known design property; its average value can be obtained for any real design.
Figure 3. "All three components and the hyperedge are within the Block" illustrates the case when a net connects three components (A, B, and C) and is represented as a hyperedge (an oval encompassing all components). In this example, all net components are in the same block, and there is no need for a block external terminal; the hyperedge does not cross the block boundary.
Again, let’s initially assume that each block has randomly assigned components. Then, the probability V′′ that a given pin of a given component within the block is connected to another component within that same block is:
The probability V′ that the remaining m−1 vertices (components) within the hyperedge are all located in the block (resulting in no block terminal for this net) is:
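Under the same random-assignment assumption, a sketch of these two probabilities (treating the placements of the remaining net components as approximately independent) is

$$V'' = \frac{g - 1}{N - 1}, \qquad V' \approx (V'')^{\,m-1} = \left(\frac{g - 1}{N - 1}\right)^{m-1}.$$

An exact count without replacement would use the product $\prod_{i=1}^{m-1} \frac{g - i}{N - i}$, which the power form approximates when g and N are much larger than m.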
Because the component under consideration has t pins on average, the probability Q that the component will have t hyperedges (block terminals) connecting to components in other blocks is:
The above formula reflects the physical reality that the more components of a net are located within the block, the lower the probability that the net will exit the block. If all m components of a net are in the block, the net requires no block terminal. With g components in the block, the number of expected block terminals is:
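Continuing the sketch (again a reconstruction in the spirit of the derivation, not necessarily the published formula), the probability that a given net incident on a component leaves the block is 1 − V', so with t pins per component and g components in the block the expected number of block terminals is roughly

$$T \approx t \cdot g \,\bigl(1 - V'\bigr) = t \cdot g \left[ 1 - \left(\frac{g - 1}{N - 1}\right)^{m-1} \right].$$

For m = 2 this reduces exactly to the graph-based estimate $t \cdot g \,(N - g)/(N - 1)$; for larger m, counting a shared external net once per in-block pin errs on the conservative (high) side, consistent with the remark below.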
Again, the drawback of formula [3] is the assumption of random component assignment. In reality, highly connected components are partitioned together to minimize external terminals. Thus, formula [3] produces conservative results. To account for optimized partitioning, we introduce a correction constant Ph<1 (similar to Pg) to reduce the estimated number of terminals:
The following final points support the justification of the new rules:
• Experimental Alignment: They provide a superior match to experimental data across all regions.
• Convergence: Terminal counts are close to Rent's predictions when g is small.
• Structural Commonality: There is a fundamental commonality in the rule structures; they can be effectively approximated by Rent's Rule for very small g.
The proposed New Rules resolve long-standing issues in VLSI modeling by explicitly incorporating N (system size), t (average pins), and m (net fan-out). By naturally constraining terminal counts at g = N, these rules provide a mathematically sound bridge across both Region I and Region II of Rent's curve.
________________________________________
References
1. Rent, E.F. (1960): Original discovery (often cited as an internal IBM memorandum).
2. Landman, B.S. and Russo, R.L. (1971): "On a Pin Versus Block Relationship for Partitions of Logic Graphs," IEEE Transactions on Computers, vol. C-20, no. 12, pp. 1469-1479.
4. Heller, W.R., Hsi, C. and Mikhail, W.F. (1978): "Chip-Level Physical Design: An Overview," IEEE Transactions on Electron Devices, vol. 25, no. 2, pp. 163-176.
6. Sutherland, I.E. and Oosterhout, W.J. (2001): "The Futures of Design: Interconnections," ACM/IEEE Design Automation Conference (DAC), pp. 15-20.
7. Davis, J. A. and Meindl, J. D. (2000): "A Hierarchical Interconnect Model for Deep Submicron Integrated Circuits," IEEE Transactions on Electron Devices, vol. 47, no. 11, pp. 2068-2073.
8. Stroobandt, D. A. and Van Campenhout, J. (2000): "The Geometry of VLSI Interconnect," Proceedings of the IEEE, vol. 88, no. 4, pp. 535-546.
9. Tetelbaum, A. (1995): "Generalizations of Rent's Rule," in Proc. 27th IEEE Southeastern Symposium on System Theory, Starkville, Mississippi, USA, March 1995, pp. 011-016.
10. Tetelbaum, A. (1995): "Estimations of Layout Parameters of Hierarchical Systems," in Proc. 27th IEEE Southeastern Symposium on System Theory, Starkville, Mississippi, USA, March 1995, pp. 123-128.
11. Tetelbaum, A. (1995): "Estimation of the Graph Partitioning for a Hierarchical System," in Proc. Seventh SIAM Conference on Parallel Processing for Scientific Computing, San Francisco, California, USA, February 1995, pp. 500-502.
________________________________________
The divergence between executive compensation and median employee wages has reached historic levels, yet current methods for determining "fair" pay often rely on peer benchmarking and market heuristics rather than structural logic. This paper proposes a new mathematical framework for determining the CEO-to-Employee Pay Ratio (Rceo) based on the internal architecture of the corporation. By integrating the Pareto Principle with organizational hierarchy theory, we derive a scalable model that calculates executive impact as a function of the company's size, span of control, and number of management levels.
Our results demonstrate that a scientifically grounded approach can justify executive compensation across a wide range of organization sizes—from startups to multinational firms—while providing a defensible upper bound that aligns with organizational productivity. Comparison with empirical data from the Bureau of Labor Statistics (BLS) suggests that this model provides a robust baseline for boards of directors and regulatory bodies seeking transparent and equitable compensation standards.
The compensation of Chief Executive Officers (CEOs) has evolved from a matter of private contract into a significant issue of public policy and corporate ethics. Over the past four decades, the ratio of CEO-to-typical-worker pay has swelled from approximately 20-to-1 in 1965 to over 300-to-1 in recent years 1.
Developing a "fair" compensation model is not merely a question of capping wealth, but of aligning the interests of the executive with those of the shareholders, employees, and the broader society. As management legend Peter Drucker famously noted: "I have over the years come to the conclusion that (a ratio) of 20-to-1 is the limit beyond which it is very difficult to maintain employee morale and a sense of common purpose." 2
________________________________________
2. Overview of Existing Works and Theories
The academic literature on CEO compensation generally falls into three primary schools of thought: Agency Theory 3, Managerial Power Hypothesis 4, and Social Comparison Theory 5. While these provide qualitative insights, they often lack a predictive mathematical engine that accounts for the physical size and complexity of the firm.
________________________________________
3. Principles and Assumptions
We propose a framework for estimating the CEO-to-Employee Pay Ratio (Rceo) based on five realistic and verifiable assumptions:
Assumption 1: The Pareto Principle. We utilize the 80/20 rule, assuming that the top 20% of a leadership hierarchy is responsible for 80% of strategic results 6.
Assumption 2: Span of Control. The model incorporates the total number of employees (N), hierarchical levels (K), and the average number of direct reports (D), benchmarked at D = 10 7. (A worked relation between N, D, and K appears after this list.)
Assumption 3: Productivity Benchmarking. The average worker's productivity (P) is set to 1 to establish a baseline for relative scaling.
Assumption 4: Hierarchical Scaling. Strategic impact increases as one moves up the organizational levels, but at a decaying rate of intensity (H).
Assumption 5: Occam’s Razor. We prioritize the simplest mathematical explanation that fits the observed wage data 8.
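A small worked relation implied by Assumption 2 (the specific numbers are only an example): with N employees and an average span of control D, the number of hierarchical levels satisfies roughly $D^{K} \approx N$, i.e.,

$$K \approx \lceil \log_{D} N \rceil, \qquad \text{e.g., } N = 100{,}000,\; D = 10 \;\Rightarrow\; K = 5.$$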
________________________________________
4. The CEO-to-Employee Pay Ratio (Rceo)
The fair compensation of a CEO (Sceo) is expressed as:
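Because the closed-form expression for Sceo is not reproduced here, the following Python sketch is purely illustrative: it shows one way the stated variables (N, D, K, H) could be combined into a hierarchy-based pay ratio. The function names, the 4x per-level "Pareto leverage" factor, and the decay form are hypothetical assumptions, not the paper's formula.

```python
# Illustrative sketch only, NOT the paper's formula: combines the stated
# variables (N employees, span of control D, decay constant H) into a
# hierarchy-based pay ratio. The 4x per-level Pareto leverage (80% of results
# from 20% of people) and its decay form are hypothetical assumptions.

def hierarchy_levels(n_employees: int, span: int = 10) -> int:
    """Approximate number of management levels K (roughly ceil(log_D N))."""
    levels, capacity = 1, span
    while capacity < n_employees:
        levels += 1
        capacity *= span
    return levels

def pay_ratio(n_employees: int, span: int = 10, decay: float = 0.8) -> float:
    """Hypothetical R_ceo: product of per-level leverage factors.

    Each level contributes a factor of 1 + 3 * decay**level, i.e. up to 4x at
    the first level (the 80/20 rule), shrinking toward 1x as the decay
    constant H < 1 damps strategic impact at higher levels.
    """
    k = hierarchy_levels(n_employees, span)
    ratio = 1.0
    for level in range(1, k + 1):
        ratio *= 1.0 + 3.0 * decay ** level
    return ratio

if __name__ == "__main__":
    print(hierarchy_levels(100_000))   # -> 5 levels for a 100,000-person firm
    print(round(pay_ratio(100_000)))   # -> about 111 under these assumptions
```

Under these assumed parameters the sketch yields ratios on the order of 10^2 for a large firm, the same order of magnitude as the empirical ratios discussed below; the point is only to show how a structural model scales with N, D, K, and H.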
Current statistics, including ranges for employee salaries (S1, S2), CEO compensation (CEO1, CEO2), and CEO-to-employee pay ratios expressed as R:1 (R1, R2), are presented in the table below.
Notes: This table compares empirical (reported) CEO-to-employee pay ratios from large public firms against modeled estimates (Model Rceo), which adjust for factors like company size, industry, and equity components. Data is illustrative based on 2024–2025 benchmarks; actual ratios vary widely.
Special cases like Tesla (2024) demonstrate that while traditional hierarchy explains baseline pay, performance-based stock options can create extreme outliers reaching ratios of 40,000:1 10.
This paper has introduced a consistent and scientifically grounded framework for determining CEO compensation. By shifting the focus from "market guessing" to hierarchical productivity scaling, we provide a transparent justification for executive pay. As an additional feature, the upper bounds of managerial remuneration at all hierarchical levels can be identified across corporations of any size.
The strength of this model is its mathematical consistency across all scales of enterprise. While determining the exact hierarchical decay constant (H) remains an area for further empirical refinement, the framework itself provides a logical and defensible constraint on executive compensation, ensuring alignment between leadership rewards and structural organizational impact.
________________________________________
References
1. Mishel, L. and Kandra, J. (2021). "CEO pay has skyrocketed 1,322% since 1978," Economic Policy Institute (EPI).
2. Drucker, P. F. (1984). "The Changed World Economy," Foreign Affairs.
3. Jensen, M. C. and Meckling, W. H. (1976). "Theory of the Firm," Journal of Financial Economics.
4. Bebchuk, L. A. and Fried, J. M. (2004). Pay Without Performance. Harvard University Press.
5. Adams, J. S. (1963). "Towards an Understanding of Inequity," Journal of Abnormal and Social Psychology.
6. Koch, R. (1998). The 80/20 Principle. Currency.
7. Gurbuz, S. (2021). "Span of Control," Palgrave Encyclopedia.
8. Baker, A. (2007). "Occam's Razor," Stanford Encyclopedia of Philosophy.
9. BLS (2024). "Occupational Employment and Wage Statistics," U.S. Department of Labor.
10. Hull, B. (2024). "Tesla's Musk pay package analysis," Reuters.
________________________________________
The question of how many humans have ever lived is more than a matter of historical curiosity; it is a fundamental demographic metric that informs our understanding of human evolution, resource consumption, and the long-term impact of our species on the planet 1. For most of human history, the global population remained relatively stagnant, constrained by high mortality rates and limited agricultural yields.
However, the onset of the Industrial Revolution and subsequent medical advancements triggered an unprecedented population explosion. This rapid growth has led to a common misconception: that the number of people alive today rivals or even exceeds the total number of people who have ever died 2. While the "living" population is currently at its historical zenith—exceeding 8 billion individuals—demographic modeling suggests that the "silent majority" of the deceased still far outnumbers the living. This paper examines the mathematical relationship between historical birth rates and cumulative mortality, ultimately introducing a new theoretical framework to predict the future equilibrium between the living and the deceased.
________________________________________
2. Overview of Existing Models and Estimates
The most widely cited estimate comes from the Population Reference Bureau (PRB) 1. Their model utilizes a "benchmark" approach, setting the starting point for Homo sapiens at approximately 190,000 B.C.E. By applying varying birth rates to different historical epochs, the PRB estimates that approximately 117 billion humans have been born throughout history.
• Total Deceased: approximately 109 billion.
• Total Living: approximately 8.1 billion.
• The Ratio: This suggests that for every person alive today, there are approximately 13 to 14 people who have died 1.
2.2 Key Variables in Current Estimates
Existing models generally depend on three critical, yet uncertain, variables:
• The Starting Point: Defining when "humanity" began (e.g., 50,000 vs. 200,000 years ago) significantly alters the cumulative count, though the lower populations of early history mean this has a smaller impact than one might expect 2.
• Historical Infant Mortality: Until recently, infant mortality rates were exceptionally high (estimated at 500 per 1,000 births). Because these individuals died before reproducing, they contribute heavily to the "deceased" count without contributing to the "living" population of the subsequent generation 3.
• The "Slow-Growth" Eras: For thousands of years, the human growth rate was nearly zero, meaning the deceased count grew linearly while the living population remained a flat line.
• Homogeneity Assumption: Most models apply a single birth rate to a large epoch, ignoring regional spikes or collapses, such as the Americas post-1492 4.
• Data Scarcity: Pre-1650 data is almost entirely speculative, based on carrying-capacity estimates of the land rather than actual headcounts 2.
• Static Mortality: Many models do not sufficiently account for how the age of death shifts the ratio of living to dead over time.
________________________________________
3. Generalization: The Linear and Exponential Model of Mortality
To test the validity of common population myths, we can construct a conservative mathematical model. Let Nlive(y) represent the living population at year y, and Ndead(y) represent the cumulative deceased population.
While modern mortality rates are low (e.g., 0.8% in 2012), historical rates were significantly higher. Using a conservative estimate of Rmort(BCE)=2.0%, the average annual deaths are:
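As a sketch of the relation being applied (the symbols $\bar{N}_{\text{BCE}}$ for the average BCE population and $D_{\text{annual}}$ for annual deaths are introduced here for illustration):

$$D_{\text{annual}} \approx R_{\text{mort}} \times \bar{N}_{\text{BCE}} = 0.02\,\bar{N}_{\text{BCE}}.$$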
3.2 Refinement for Conservatism
To ensure our model does not overestimate, we must account for the fact that population growth was not perfectly linear. If the "real" population curve (the green line in our model) stays below the linear trajectory, the area A1 represents an overestimation. To correct for this, we reduce the slope A of our model by half to ensure we are underestimating the dead. This yields a revised average BCE population:
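As an illustrative arithmetic check (the 50-million average population is an assumed round figure, not a value taken from the model above): an average BCE population of roughly 50 million, dying at 2% per year over the approximately 10,000 years from about 9950 BCE to 0 CE, accumulates

$$0.02 \times (5 \times 10^{7}) \times 10^{4} = 10^{10} = 10 \text{ billion deaths},$$

which is the order of magnitude of the conservative estimate quoted below.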
Even under this strictly conservative 10-billion estimate, the deceased population remains higher than the current living population (7.9 billion).
Conclusion 2: Starting around 9950 BCE, the cumulative number of deceased individuals has consistently remained higher than the number of living individuals.
________________________________________
4. Modern Era and Future Predictions
For the period from 0 CE to 2022 CE, the population is better represented by an exponential model:
Where C = 0.000000005 and K = 0.01046. Applying a modern mortality rate of 0.9%, we can track the "Live World" vs. the "Dead World." Note that you can find useful graphs and illustrations in my book 5 that discuss tough problems, including this one.
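Assuming the exponential model takes the simple form $N_{\text{live}}(y) = C\,e^{K y}$, with y the calendar year and the result expressed in billions (this functional form is inferred from the constants above rather than stated explicitly), a quick consistency check gives

$$N_{\text{live}}(2022) = 5 \times 10^{-9}\, e^{0.01046 \times 2022} \approx 5 \times 10^{-9}\, e^{21.15} \approx 7.7 \text{ billion},$$

in line with the current world population.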
4.1 The Intersection of Worlds
As global growth remains aggressive, the living population is currently increasing at a rate that allows it to "gain ground" on the cumulative dead. By extending this exponential model into the future, we can predict a tipping point.
Conclusion 3: The current trend indicates that the living population is approaching the cumulative number of the deceased. Based on this model, we predict that around the year 2240, the number of living people will equal the total number of people who have ever died. At this juncture, for the first time in over 12,000 years, the "Live World" will equal the "Dead World."
________________________________________
5. References
1. Kaneda, T. and Haub, C. (2021). "How Many People Have Ever Lived on Earth?" Population Reference Bureau (PRB).
2. Westing, A. H. (1981). "A Note on How Many People Have Ever Lived," BioScience, vol. 31, no. 7, pp. 523-524.
3. Keyfitz, N. (1966). "How Many People Have Lived on the Earth?" Demography, vol. 3, no. 2, pp. 581-582.
4. Whitmore, T. M. (1991). "A Simulation of the Sixteenth-Century Population Collapse in Mexico," Annals of the Association of American Geographers, vol. 81, no. 3, pp. 464-487.
5. Tetelbaum, A. Solving Non-Standard Very Hard Problems, Amazon Books.
________________________________________