1. Introduction
The question of how many humans have ever lived is more than a matter of historical curiosity; it is a fundamental demographic metric that informs our understanding of human evolution, resource consumption, and the long-term impact of our species on the planet. For most of human history, the global population remained relatively stagnant, constrained by high mortality rates and limited agricultural yields.
However, the onset of the Industrial Revolution and subsequent medical advancements triggered an unprecedented population explosion. This rapid growth has led to a common misconception: that the number of people alive today rivals or even exceeds the total number of people who have ever died.
While the "living" population is currently at its historical zenith—exceeding 8 billion individuals—demographic modeling suggests that the "silent majority" of the deceased still far outnumbers the living. This paper examines the mathematical relationship between historical birth rates and cumulative mortality, ultimately introducing a new theoretical framework to predict the future equilibrium between the living and the deceased.
________________________________________
2. Overview of Existing Models and Estimates
Estimating the total number of humans who have ever lived involves significant "demographic archaeology." Because census data exists for only a tiny fraction of human history, researchers rely on a combination of archaeological evidence, historical fertility models, and life expectancy estimates.
2.1 The PRB (Population Reference Bureau) Model
The most widely cited estimate comes from the Population Reference Bureau (PRB). Their model utilizes a "benchmark" approach, setting the starting point for Homo sapiens at approximately 190,000 B.C.E. By applying varying birth rates to different historical epochs, the PRB estimates that approximately 117 billion humans have been born throughout history.
• Total Deceased: approximately 109 billion.
• Total Living: approximately 8.1 billion.
• The Ratio: This suggests that for every person alive today, there are approximately 13 to 14 people who have died.
2.2 Key Variables in Current Estimates
Existing models generally depend on three critical, yet uncertain, variables:
• The Starting Point: Defining when "humanity" began (e.g., 50,000 vs. 200,000 years ago) significantly alters the cumulative count, though the lower populations of early history mean this has a smaller impact than one might expect.
• Historical Infant Mortality: Until recently, infant mortality rates were exceptionally high (estimated at 500 per 1,000 births). Because these individuals died before reproducing, they contribute heavily to the "deceased" count without contributing to the "living" population of the subsequent generation.
• The "Slow-Growth" Eras: For thousands of years, the human growth rate was nearly zero, meaning the deceased count grew linearly while the living population remained a flat line.
2.3 Drawbacks of Current Models
• Homogeneity Assumption: Most models apply a single birth rate to a large epoch, ignoring regional spikes or collapses, such as the Americas post-1492.
• Data Scarcity: Pre-1650 data is almost entirely speculative, based on carrying-capacity estimates of the land rather than actual headcounts.
• Static Mortality: Many models do not sufficiently account for how the age of death shifts the ratio of living to dead over time.
________________________________________
3. Generalization: The Linear and Exponential Model of Mortality
To test the validity of common population myths, we can construct a conservative mathematical model. Let $L(t)$ represent the living population at year $t$, and $D(t)$ represent the cumulative deceased population.
3.1 Analysis of the BCE Era (10,000 BCE to 0 CE)
We begin with known benchmarks: $L(-10{,}000) \approx 5$ million and $L(0) \approx 200$ million. A simple linear model provides an average population:

$$\bar{L} = \frac{5 + 200}{2} \approx 100 \text{ million}$$

The number of deaths per year, $d(t)$, is a function of the mortality rate $m$:

$$d(t) = m \cdot L(t)$$

While modern mortality rates are low (e.g., $m \approx 0.008$ in 2012), historical rates were significantly higher. Using a conservative estimate of $m = 0.02$, the average annual deaths are:

$$\bar{d} = 0.02 \times 100 \text{ million} = 2 \text{ million per year}$$

Over the 10,000-year BCE span, the cumulative dead would be:

$$D(0) \approx 2 \text{ million} \times 10{,}000 \approx 20 \text{ billion}$$
Conclusion 1: Since the 2022 living population is approximately $8$ billion, the deceased population already exceeded the modern living population before the Common Era began.
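The arithmetic of this baseline estimate can be checked with a few lines of Python (a minimal sketch; the benchmark values and variable names follow the assumptions above):

```python
# Linear model of the BCE era: L(t) interpolates between the two benchmarks.
T = 10_000                   # years from 10,000 BCE to 0 CE
L_start = 5e6                # assumed population at 10,000 BCE (~5 million)
L_end = 200e6                # assumed population at 0 CE (~200 million)
m = 0.02                     # conservative historical mortality rate (20 per 1,000)

L_avg = (L_start + L_end) / 2           # average population under a linear model
deaths_per_year = m * L_avg             # d = m * L_avg
cumulative_dead = deaths_per_year * T   # total deaths over the BCE span

print(f"Average population: {L_avg / 1e6:.1f} million")            # ~100 million
print(f"Cumulative dead:    {cumulative_dead / 1e9:.1f} billion")  # ~20 billion
```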
3.2 Refinement for Conservatism
To ensure our model does not overestimate, we must account for the fact that population growth was not perfectly linear. If the "real" population curve (the green line in our model) stays below the linear trajectory, the area between the two curves represents an overestimation of deaths.
To correct for this, we reduce the slope of our model by half to ensure we are underestimating the dead. This yields a revised average BCE population:

$$\bar{L}_{\text{cons}} \approx 5 + \frac{200 - 5}{4} \approx 50 \text{ million}$$

which gives a cumulative total of roughly $0.02 \times 50 \text{ million} \times 10{,}000 \approx 10$ billion deaths. Even under this strictly conservative 10-billion estimate, the deceased population remains higher than the current living population ($\approx 8$ billion).
Note also that at $m = 0.02$, cumulative deaths grow to match the (slowly growing) living population within roughly $1/m = 50$ years of the starting point.
Conclusion 2: From approximately 9950 BCE onward, the cumulative number of deceased individuals has consistently remained higher than the number of living individuals.
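A year-by-year simulation of the conservative variant locates this crossover directly (a sketch under the same assumptions as above):

```python
# Conservative BCE model: halved slope, m = 0.02.
# Find the first year in which cumulative deaths exceed the living population.
L_start, L_end = 5e6, 200e6
T, m = 10_000, 0.02
slope = (L_end - L_start) / T / 2   # slope halved for conservatism

dead = 0.0
for year in range(T + 1):
    living = L_start + slope * year
    dead += m * living
    if dead > living:
        print(f"Crossover after {year} years, i.e. around {10_000 - year} BCE")
        break
```

With these parameters the crossover arrives after roughly $1/m = 50$ years, matching Conclusion 2.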
________________________________________
4. Modern Era and Future Predictions
For the period from 0 CE to 2022 CE, the population is better represented by an exponential model:

$$L(t) = L_0 e^{rt}$$

where $L_0 \approx 200$ million and $r \approx 0.0018\ \text{yr}^{-1}$, fitted so that $L(2022) \approx 8$ billion. Applying a modern mortality rate of $m \approx 0.008$, we can track the "Live World" vs. the "Dead World."
Graphs and illustrations of this and related problems can be found in the author's book [5].
4.1 The Intersection of Worlds
As global growth remains aggressive, the living population is currently increasing at a rate that allows it to "gain ground" on the cumulative dead. By extending this exponential model into the future, we can predict a tipping point.
Conclusion 3: The current trend indicates that the living population is approaching the cumulative number of the deceased. Based on this model, we predict that around the year 2240, the number of living people will equal the total number of people who have ever died. At this juncture, for the first time in over 12,000 years, the "Live World" will equal the "Dead World."
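The prediction can be reproduced with a simple forward simulation. Every parameter below is an assumption for illustration: the long-run exponential fit above ($r \approx 0.0018$) is far too slow to ever close the gap, so a sustained modern-era growth rate of roughly 1.5% per year is assumed, together with $m \approx 0.008$ and PRB-scale starting stocks:

```python
# Forward projection of the "Live World" vs. the "Dead World".
living = 8e9     # 2022 living population
dead = 109e9     # assumed cumulative deceased in 2022 (PRB-scale estimate)
r = 0.015        # assumed sustained annual growth rate (aggressive)
m = 0.008        # assumed modern crude mortality rate

year = 2022
while living < dead:
    dead += m * living   # this year's deaths join the Dead World
    living *= 1 + r      # exponential growth of the Live World
    year += 1
print(f"Living overtakes the cumulative dead around {year}")  # mid-2240s here
```

The exact crossover year is highly sensitive to the assumed growth rate; it is the qualitative trend, not the specific date, that the model supports.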
________________________________________
5. References
1. Kaneda, T. and Haub, C. (2021). "How Many People Have Ever Lived on Earth?" Population Reference Bureau (PRB).
2. Westing, A. H. (1981). "A Note on How Many People Have Ever Lived," BioScience, vol. 31, no. 7, pp. 523-524.
3. Keyfitz, N. (1966). "How Many People Have Lived on the Earth?" Demography, vol. 3, no. 2, pp. 581-582.
4. Whitmore, T. M. (1991). "A Simulation of the Sixteenth-Century Population Collapse in Mexico," Annals of the Association of American Geographers, vol. 81, no. 3, pp. 464-487.
5. Tetelbaum, A. Solving Non-Standard Very Hard Problems. Amazon Books.
________________________________________
1. Introduction
The relentless progress in integrated circuit density, governed for decades by the principles of Moore’s Law, has shifted the bottleneck of system design from transistor speed to interconnection complexity. As System-on-Chip (SoC) and massively parallel architectures incorporate billions of transistors, the ability to accurately predict and manage the wiring demands, power consumption, and physical area of a design has become paramount. Early-stage architectural exploration and physical synthesis rely heavily on robust models that quantify the relationship between logic complexity and communication requirements.
The foundational model in this domain is Rent's Rule. Discovered empirically by E. F. Rent at IBM in the 1960s, and later formalized by Landman and Russo, the rule establishes a fundamental power-law relationship between the number of external signal connections (terminals) to a logic block and the number of internal components (gates or standard cells) it contains. Mathematically, the rule is expressed as:

$$T = k \cdot G^{p}$$

Where: $T$ is the number of external terminals (pins/connections); $G$ is the number of internal logic components (gates/blocks); $k$ is the Rent constant; and $p$ is the Rent exponent.
While Rent's Rule has served as an indispensable tool for wire length estimation, placement optimization, and technology prediction, its empirical origins and inherent limitations—especially when applied to modern, highly heterogeneous architectures—necessitate a generalized framework. This paper introduces the New Rule (Tetelbaum's Law), which addresses its primary shortcomings by incorporating explicit structural constraints, thereby extending its utility to the next generation of complex computing systems.
________________________________________
2. Overview of Rent's Rule and Current Drawbacks
2.1. Current Results and Applications
Rent's Rule describes a statistical self-similarity in the organization of complex digital systems, implying that a circuit partitioned at any level of the hierarchy exhibits the same power-law relationship between pins and gates.
The Rent exponent, $p$, is the central characteristic of the rule and provides immediate insight into a design's topological complexity: $p$ near zero corresponds to highly regular structures; a low $p$ is typical of structured designs with high locality (e.g., $p \approx 0.12$ for memory); and $p \approx 0.75$ is characteristic of "random logic" or complex, unstructured designs.
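In practice, the Rent parameters of a design are extracted by recursively partitioning its netlist and fitting a straight line in log-log space. A minimal sketch (the $(G, T)$ data below are synthetic, for illustration only):

```python
import numpy as np

# Synthetic (G, T) pairs: average gate and terminal counts per partition level.
G = np.array([1, 4, 16, 64, 256, 1024])
T = np.array([3.0, 7.1, 16.4, 38.9, 90.2, 211.0])

# Rent's Rule T = k * G**p is linear in log-log space:
#   log T = log k + p * log G,
# so a least-squares line directly yields p (slope) and k (intercept).
p, log_k = np.polyfit(np.log(G), np.log(T), 1)
print(f"Rent exponent p ~ {p:.2f}, Rent constant k ~ {np.exp(log_k):.2f}")
```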
The rule’s primary utility lies in its application to interconnect prediction:
1. Wire Length Estimation: Donath and others demonstrated that the Rent exponent $p$ is directly correlated with the average wire length and distribution. A lower $p$ value implies greater locality and shorter expected wire lengths, which is crucial for power and timing analysis.
2. A Priori System Planning: By estimating the Rent exponent early in the design flow, architects can predict necessary routing resources, estimate power dissipation due to interconnects, and evaluate the feasibility of a physical partition before detailed placement and routing.
2.2. Drawbacks and Limitations
Despite its foundational role, the power-law form of Rent's Rule suffers from several well-documented drawbacks that limit its accuracy and domain of applicability in advanced systems:
1. Terminal Constraint Deviation (Region II): The most significant limitation is the breakdown of the power law for partitions encompassing a very large number of components (i.e., when approaching the size of the entire chip). Since a physical chip has a finite number of peripheral I/O pins, the actual terminal count for the largest partition is physically constrained and ceases to follow the predicted power-law trend. This phenomenon is known as Rent's Region II, where the log-log plot flattens. This deviation is critical for packaging and system-level planning.
2. Small Partition Deviation (Region III): A deviation also occurs for very small partitions. This Rent's Region III, often attributed to local wiring effects and the intrinsic definition of the base logic cell, suggests the power-law assumption is inaccurate at the lowest hierarchical levels.
3. Assumption of Homogeneity: The theoretical underpinnings of Rent's Rule often assume a statistically homogeneous circuit topology and placement. Modern System-on-Chip (SoC) designs are fundamentally heterogeneous, consisting of diverse functional blocks (e.g., CPU cores, memory controllers, accelerators). Each sub-block exhibits a distinct intrinsic Rent exponent, rendering a single, global Rent parameter insufficient for accurate modeling.
4. Inaccuracy for Non-Traditional Architectures: As an empirical model based on traditional VLSI, Rent's Rule is less applicable to highly specialized or non-traditional structures, such as advanced 3D integrated circuits (3D-ICs) or neuromorphic systems, where the physical communication graph significantly deviates from planar assumptions.
These limitations demonstrate a pressing need for a generalized Rent's Rule framework capable of modeling non-uniform locality, structural hierarchy, and physical I/O constraints.
________________________________________
3. The New Rule: Generalization for Autonomic Systems
Dr. Alexander Tetelbaum utilized a graph-mathematical model to generalize Rent’s Rule, specifically addressing its limitations when applied to autonomic systems. His work demonstrated that the classical power-law form of Rent’s Rule is valid only under the restrictive conditions where the system contains a large number of blocks (designs), and the number of internal components $G$ in a block is much smaller than the total number of components $N$ in the entire system ($G \ll N$).
The generalized formulation, referred to as the New Rule (or Tetelbaum's Law), extends the applicability of the scaling law across the entire range of partition sizes, including the problematic Rent's Region II. The New Rule is expressed as:

$$T = k \left( \frac{G\,(N - G)}{N - 1} \right)^{p}$$

Where: $T$ is the number of external terminals for the block partition; $N$ is the total number of components in the system; $G$ is the number of components in the block partition; $k$ represents the average number of pins of a component in the system; and $p$ is the generalized Rent exponent, derived by the described graph-partitioning method.
Key Behavioral Cases
The following boundary conditions illustrate the behavior of the New Rule, confirming its consistency with physical constraints and highlighting the overestimation inherent in the classical formulation:
• Case 1: Single Component ($G = 1$). When a block contains a single component, the New Rule simplifies to $T = k$, which is identical to the behavior of Rent’s Rule.
• Case 2: Maximum Partition ($G = N/2$). When the system is divided exactly in half, the New Rule yields the maximum terminal count. By contrast, the classical Rent’s Rule, $T = k \cdot G^{p}$, continues to increase as $G$ increases, leading to significant overestimation for large $G$.
• Case 3: Full System ($G = N$). When the block contains all system components, $N - G = 0$, resulting in $T = 0$. This accurately reflects the physical reality that the entire system (if autonomic) has no external signal terminals, thereby explicitly modeling the crucial Rent's Region II terminal constraint deviation.
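The contrast between the two formulations is easy to visualize numerically. The closed form used below is the one reconstructed above from the three boundary cases (a sketch consistent with those cases, not a quotation of the published formula), with illustrative values $k = 3$, $p = 0.6$:

```python
import numpy as np

def rent_classical(G, k=3.0, p=0.6):
    """Classical Rent's Rule: T = k * G**p (no system-size constraint)."""
    return k * G**p

def rent_generalized(G, N, k=3.0, p=0.6):
    """Form consistent with the stated boundary cases:
    T = k at G = 1, maximum at G = N / 2, and T = 0 at G = N."""
    return k * (G * (N - G) / (N - 1)) ** p

N = 1024
for G in [1, 32, N // 2, N]:
    print(f"G={G:5d}  classical={rent_classical(G):7.1f}  "
          f"generalized={rent_generalized(G, N):7.1f}")
```

As expected, the two models agree for small $G$ and diverge sharply as the partition approaches the full system, where the classical power law keeps growing while the generalized form falls to zero.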
Advantages of the New Rule
The New Rule provides several key advantages that address the limitations of the classical power law:
• Full-Range Analysis: It permits the accurate analysis of system blocks containing an arbitrary number of components.
• Improved Accuracy: Comparisons between theoretical predictions and empirical data from 28 complex electronic systems demonstrated that terminal count estimations using the New Rule are substantially more accurate than those obtained with Rent’s Rule.
• Physical Derivation: The constants $k$ and $p$ can be derived directly from the properties of actual designs and systems.
• Interconnection Estimation: The New Rule enables the accurate estimation of interconnection length distribution for design optimization.
________________________________________
4. Conclusion
The complexity of modern electronic systems necessitates robust, predictive models for interconnect planning and resource allocation. Rent's Rule has served as a cornerstone for this task, offering a simple yet powerful power-law framework for relating logic complexity to communication demand. However, the rule's inherent empirical limitations—specifically the breakdown at system-level constraints (Region II) and its inaccuracy for heterogeneous architectures—render it increasingly insufficient for the challenges of advanced VLSI and system design.
The proposed New Rule (Tetelbaum's Law) represents a critical generalization that resolves these long-standing issues. By explicitly incorporating the total number of system components ($N$) into the formulation, the New Rule accurately models the terminal count across the entire spectrum of partition sizes. Its mathematical form naturally constrains the terminal count to zero when the partition equals the system size ($G = N$), perfectly capturing the physical I/O constraints that define Rent's Region II. Furthermore, the proven accuracy improvement over the classical model confirms its superior predictive capability.
This generalized framework allows architects to perform more reliable, full-system interconnect planning a priori. Future work will focus on extending the New Rule to explicitly model non-uniform locality within heterogeneous SoCs, and applying it to non-traditional geometries, such as 3D integrated circuits, where the concept of locality must be defined across multiple physical layers.
________________________________________
5. References
1. Rent, E.F. (1960): Original discovery (often an internal IBM memorandum).
2. Landman, B.S. and Russo, R.L. (1971): "On Pin Versus Block Relationship for Partitions of Logic Graphs," IEEE Transactions on Computers, vol. C-20, no. 12, pp. 1469-1479.
3. Donath, W.E. (1981): "Wire Length Distribution for Computer Logic," IBM Technical Disclosure Bulletin, vol. 23, no. 11, pp. 5865-5868.
4. Heller, W.R., Hsi, C. and Mikhail, W.F. (1978): "Chip-Level Physical Design: An Overview," IEEE Transactions on Electron Devices, vol. 25, no. 2, pp. 163-176.
5. Bakoglu, H.B. (1990): Circuits, Interconnections, and Packaging for VLSI. Addison-Wesley.
6. Sutherland, I.E. and Oosterhout, W.J. (2001): "The Futures of Design: Interconnections," ACM/IEEE Design Automation Conference (DAC), pp. 15-20.
7. Davis, J. A. and Meindl, J. D. (2000): "A Hierarchical Interconnect Model for Deep Submicron Integrated Circuits," IEEE Transactions on Electron Devices, vol. 47, no. 11, pp. 2068-2073.
8. Stroobandt, D. A. and Van Campenhout, J. (2000): "The Geometry of VLSI Interconnect," Proceedings of the IEEE, vol. 88, no. 4, pp. 535-546.
9. Tetelbaum, A. (1995): "Generalizations of Rent's Rule," in Proc. of 27th IEEE Southeastern Symposium on System Theory, Starkville, Mississippi, USA, March 1995, pp. 011-016.
10. Tetelbaum, A. (1995): "Estimations of Layout Parameters of Hierarchical Systems," in Proc. of 27th IEEE Southeastern Symposium on System Theory, Starkville, Mississippi, USA, March 1995, pp. 123-128.
11. Tetelbaum, A. (1995): "Estimation of the Graph Partitioning for a Hierarchical System," in Proc. of the Seventh SIAM Conference on Parallel Processing for Scientific Computing, San Francisco, California, USA, February 1995, pp. 500-502.
Physics KS4: Length and Time by Christiana 1 Created 2024-10-29 Updated 2025-04-18
In physics, measurements help to understand the physical world. Two fundamental quantities we often measure are length and time. Length is defined as the distance between two points. It helps us quantify how far apart objects are, whether that’s measuring the size of a classroom, the height of a building, or the distance between two cities. On the other hand, time refers to the continuous progression of events, allowing us to determine how long a process takes. Time is essential for understanding motion, cycles, and changes in the physical world.
Units of Measurement
To ensure consistency in scientific communication, standardized units of measurement are used. The International System of Units (SI) provides the framework for this.
For measuring length, the SI unit is the metre (m). Smaller lengths can be expressed in millimetres (mm) or centimetres (cm), while larger distances, such as the space between cities or countries, are measured in kilometres (km). For instance, the length of a pencil may be 18 cm, while the distance between London and Manchester is approximately 260 km. Knowing how to convert between these units is important: for example, 1 kilometre is equal to 1,000 metres, and 1 metre is equivalent to 100 centimetres.
When measuring time, the SI unit is the second (s). This unit is widely used in science, particularly for measuring short intervals. For longer durations, minutes (min) and hours (h) are commonly used. For example, it may take you 5 minutes to walk to school, while a football match lasts 90 minutes, which is equal to 1 hour and 30 minutes. In scientific experiments, time intervals are often much shorter, measured in seconds or even fractions of a second. The relationship between units of time is straightforward: 60 seconds make up 1 minute, and 60 minutes make up 1 hour.
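Because each conversion is a single multiplication, they are easy to capture in code (a small sketch; the function names are just for illustration):

```python
# Unit conversions between common length and time units.
def km_to_m(km):
    return km * 1000       # 1 km = 1,000 m

def m_to_cm(m):
    return m * 100         # 1 m = 100 cm

def hours_to_seconds(h):
    return h * 60 * 60     # 60 minutes per hour, 60 seconds per minute

print(km_to_m(260))           # London to Manchester: 260,000 m
print(hours_to_seconds(1.5))  # a 90-minute football match: 5,400 s
```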
Measuring Length
Various tools are used to measure length, depending on the precision required. For everyday measurements, a ruler or tape measure is sufficient. For more precise scientific measurements, devices such as vernier calipers or micrometers are used. These tools allow us to measure length down to fractions of a millimetre. For example, a ruler may tell us that a piece of string is 12 cm long, but a vernier caliper could measure it more precisely, to the nearest tenth of a millimetre, such as 12.34 cm.
In physics experiments, it’s crucial to ensure accuracy and precision when measuring length. This can involve repeated measurements and careful observation to minimize errors.
Measuring Time
To measure time intervals, we often use stopwatches or clocks. A stopwatch is particularly useful in experiments where we need to record the exact time something takes to occur, such as the duration of a pendulum’s swing or the time it takes for an object to fall. For instance, if you want to measure how long it takes for a ball to drop from a certain height, you could use a stopwatch to record the fall in seconds.
In modern physics, extremely precise instruments like atomic clocks are used to measure time with remarkable accuracy. These clocks can measure time intervals to a fraction of a second, and they are used for highly sensitive experiments, such as those involving the speed of light or synchronization in satellite systems.
Practical Applications of Length and Time
Measurements of length and time underpin much of physics and everyday technology. For instance, when studying the motion of objects, knowing how far something has moved (length) and how long it took (time) is fundamental to calculating speed. In technology, precise time measurements are important for synchronization in communication systems, while accurate length measurements are key in construction, engineering, and manufacturing processes.
Example Questions
1. Convert 5.5 kilometres into metres.
2. How many seconds are there in 2 hours?
3. You have a ruler marked in centimetres. If a pencil measures 14.5 cm, how long is it in millimetres?
4. Using a stopwatch, you record the time it takes for a marble to roll down a ramp as 3.2 seconds. How would you express this time in milliseconds?
5. A cyclist travels 24 kilometres in 2 hours. What is the cyclist’s average speed in kilometres per hour (km/h)?
6. If a pendulum takes 2.5 seconds to complete one swing, how many swings will it complete in 1 minute?
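For checking your work, the arithmetic behind each question can be written out directly (a short sketch using the conversions above):

```python
# Worked answers to the example questions.
print(5.5 * 1000)    # Q1: 5.5 km = 5,500 m
print(2 * 60 * 60)   # Q2: 2 hours = 7,200 s
print(14.5 * 10)     # Q3: 14.5 cm = 145 mm
print(3.2 * 1000)    # Q4: 3.2 s = 3,200 ms
print(24 / 2)        # Q5: average speed = 24 km / 2 h = 12 km/h
print(60 / 2.5)      # Q6: 60 s / 2.5 s per swing = 24 swings
```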
Admissible Heuristic by Petros Katiforis 1 Created 2024-09-24 Updated 2025-04-18
In the context of an A* search, a heuristic function is said to be admissible if it does not overestimate the cost to reach the goal. Such functions can also be viewed as being "optimistic".
When using an admissible heuristic, A* is guaranteed to return a cost-optimal solution, i.e. the best path. Let's prove it by contradiction:
Assume that the algorithm returned a path whose cost is greater than that of the optimal path $P^*$. Let $C$ be the cost of the path that was followed, and $C^*$ the cost of $P^*$, noting that $C > C^*$. First, we can safely assume that at least one node in $P^*$ was not expanded during the algorithm's execution (if all nodes of $P^*$ had been expanded, then $P^*$ would have been chosen instead, since that would lead to a lower path cost)[1]. Without loss of generality, let's take the first occurrence of such an unexpanded node and name it $n$. Let $h^*(n)$ be the actual cost from $n$ to the destination and define $g^*(n)$ as the cost of the optimal path starting from the origin all the way to $n$. Remember that A* is a best-first search on the value of $f = g + h$, so $f(n) \ge C$ for all unexpanded nodes. Here's what we've got:

$$f(n) = g(n) + h(n) = g^*(n) + h(n) \le g^*(n) + h^*(n)$$

(the middle equality holds because every node of $P^*$ before $n$ has been expanded, so $n$ was reached along $P^*$ with $g(n) = g^*(n)$; the inequality holds because $h$ is admissible).
Now note that $g^*(n) + h^*(n)$ is equal to "the cost to reach $n$ following the optimal path" + "the cost to reach the goal starting from $n$, following the optimal path". That's just equal to the total cost of the optimal path! So, both $f(n) \ge C > C^*$ and $f(n) \le C^*$ hold simultaneously, which obviously constitutes a contradiction.
[1]: Remember that $h$ (our heuristic) is just a hint to prioritize certain expansions over others. If everything were expanded, however, $g$ is the sole metric that would be considered, which would always lead to the optimal path being selected, that being $P^*$.
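The guarantee is easy to observe on a tiny example. Below is a minimal A* sketch (the graph and heuristic values are hypothetical) in which a tempting direct edge is more expensive than a two-step path; with an admissible $h$, the optimal path is returned:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search. graph maps node -> list of (neighbor, edge_cost);
    h is a heuristic, admissible if h(n) never overestimates cost to goal."""
    frontier = [(h(start), 0, start, [start])]  # entries: (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h(nbr), g2, nbr, path + [nbr]))
    return None

# The direct edge A -> C (cost 5) is a trap; A -> B -> C costs only 2.
graph = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)]}
h = lambda n: {"A": 2, "B": 1, "C": 0}[n]  # admissible: never overestimates
print(a_star(graph, h, "A", "C"))          # -> (2, ['A', 'B', 'C'])
```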
Glass / Glass as a Non-classical State of Matter by pioyi 6 Created 2024-09-20 Updated 2025-04-18
A common misconception suggests that glass is a liquid of high viscosity. This is not the case: glass is its own distinct state of matter that doesn't coincide with any classical one. Every liquid (except helium) can be turned into glass if sufficiently rapid cooling takes place. This process is called vitrification. When a liquid is cooled (water, for example), it normally goes through the process of crystallization: we say the water is frozen as ice forms, which has a very specific structure characterized by its stability (a crystalline solid). If the cooling happens quickly enough, the water molecules don't have the opportunity to occupy the lowest-energy sites, and this is how an amorphous solid forms.
Annealing illustrates this phenomenon. When glass is cooled down in order to solidify, the process must span a specific time interval so that the molecules have enough time to position themselves in a more stable manner. If, during the glass-making process, the produced glass solidifies too rapidly, the stress present in the solid makes it so fragile that it can rupture or shatter even during handling. By reheating the glass and slowly dropping the temperature again, we ensure that the glass object gains a much more stable structure.
Just as crystalline solids have a melting point, amorphous ones have a glass transition temperature, which surprisingly depends on thermal history (how rapidly was the former liquid made into glass?). Around this temperature, the viscosity of the glass increases rapidly and the material can be classified as a solid (in classical terms). It should be noted that the viscosity, as well as other properties of the substance made into glass, changes continuously as the temperature changes. This is not the case with "ordinary" freezing, where liquid water turns into a solid discontinuously (during the freezing process the temperature stays constant; immediately below it, all the water has turned into ice).
The glass state is metastable: its transition to a crystalline solid is thermodynamically favoured although kinetically hindered.
Home by Matthew Barnes 1 Created 2023-05-27 Updated 2025-04-18
Welcome to my home page!
Scribe by rtnf 1 Created 2022-11-29 Updated 2025-04-18
A scribe is a person who serves as a professional copyist. The work of scribes can involve copying manuscripts and other texts as well as secretarial and administrative duties such as the taking of dictation and the keeping of business, judicial and historical records for kings, nobles, temples and cities. The profession has developed into public servants, journalists, accountants, bookkeepers, typists, and lawyers.
One of the most important professionals in ancient Egypt was a person educated in the arts of writing and arithmetic. Scribes were considered part of the royal court, were not conscripted into the army, did not have to pay taxes and were exempt from the heavy manual labor required of the lower classes. Sons of scribes were brought up in the same scribal tradition, sent to school and inherited their fathers' positions upon entering the civil service. Much of what is known about ancient Egypt is due to the activities of its scribes and officials. Monumental buildings were erected under their supervision, administrative and economic activities were documented by them, and stories from Egypt's lower classes and foreign lands survive due to scribes putting them in writing.
In addition to accountancy and governmental politicking, the scribal professions branched out into literature. The first stories were probably religious texts. Other genres evolved, such as wisdom literature: collections of the philosophical sayings of wise men. These contain the earliest recordings of societal thought and explorations of ideas at some length and detail.
In the Middle Ages, every book was made by hand. Specially trained scribes had to carefully cut sheets of parchment, make the ink, write the script, bind the pages and create a cover to protect the script. This was all accomplished in a writing room called a scriptorium, which was kept very quiet so scribes could maintain concentration. A large scriptorium might have had up to 40 scribes working. Scribes woke to morning bells before dawn and worked until the evening bells, with a lunch break in between. They worked every day except for the Sabbath. Scribes could only work in daylight, due to the expense of candles.
The scribe was a common job in medieval European towns during the 10th and 11th centuries. Many were employed at scriptoria owned by local schoolmasters or lords. These scribes worked under deadlines to complete commissioned works such as historic chronicles or poetry.
These scribes would meticulously record the information presented in the texts, but not mindlessly. In the case of herbals, for instance, there is evidence that the monks improved upon some texts, corrected textual errors, and made the text particularly relevant to the area in which they lived. Some scribes even went so far as to grow some of the plants included in the texts. They had little room or patience for disseminating imaginary plants; the writers truly restricted themselves to including only practical information.
Meanwhile, in the case of bestiaries, the scribes generally copied and cited previous texts in order to pass them on. Unlike with the herbals, the scribes could not grow an animal in their garden, so the information in the bestiaries was largely taken at face value.
In the 13th century, Paris was the first city to have a large commercial trade of manuscripts, with book producers being commissioned to make specific books for specific people. Paris had a large enough population of wealthy literate persons to support the livelihood of people producing manuscripts.
About Programthink by anonymousajo1u12nglajgk 1 Created 2022-11-21 Updated 2025-04-18
I cannot join your movement, because I really have more reasons than others do; for privacy, I can't explain them. And as for me, I know that given the current situation in China, I can't do anything. For China, I think the only solution is revolution, not reform. Without technology, the protesters will die without anybody knowing. People could try, but it is a way of sacrifice.
According to the report, this organization helped get programthink arrested. I researched this for some days, and the evidence supports its claim.
净网志愿者协会: maybe one day you will be captured by them too. But you are a foreigner, so they have no way to do that.
净网志愿者协会 is a semi-official organization, but one with high-tech hacking capability. You know, in China, high-tech hackers will not become officials. Because it is a semi-official organization with many young "little pink" engineers, the tracking of programthink could be done.
They are not actually wumao, but they may be a more formidable opponent. Since you live abroad, they can't do anything to you, and officials will not always listen to their views.
Why do I think its claim about programthink's arrest is credible? Because of some inferences.
The organization says programthink is 马勇康.
github.com/programthink/zhao/issues/418
One year later, in May 2022, the official media (the only media) reported on a 马某某 charged with 煽动分裂国家、煽动颠覆国家政权 (inciting separatism and inciting subversion of state power).
baijiahao.baidu.com/s?id=1731886110528251526&wfr=spider&for=pc
The surname 马 covers only about 1% of the population in China. Coincidence? And consider the report's keywords: 浙江温州 (Wenzhou, Zhejiang), 科技有限公司 (technology co., ltd.), "洗脑" ("brainwashing"), "人生导师" ("life mentor"), 顽固 ("stubborn").
They read like copycat keywords for the programthink blog, and they match its characteristics.
Also, the organization says 马勇康 is 山东人 (from Shandong), while the official media say he is 浙江人 (from Zhejiang).
浙江 (Zhejiang province) has fewer people surnamed 马 than 山东 (Shandong).
And the post from programthink's family says that programthink last went on a business trip to a city in eastern China. You see: 温州, 浙江, 上海, all the keywords point to eastern China. 温州 is an interesting keyword: if you search programthink's issues, there is one strange 温州 person, but nobody from any other county of Zhejiang province.
As for the other charges, such as having a 政治纲领 (political platform): as far as I know, CCP police need some more outstanding achievements. 继续调查中 ("investigation ongoing"), because the CCP police need more news for the mouthpiece, even if it is not neutral. And it is possible that programthink thinks he could take all the blame on his own shoulders.
These inferences may be only inferences; we never know what the truth is.
www.bilibili.com/video/BV14S4y1T7RT/?spm_id_from=333.999.0.0
净网志愿者协会 mentions programthink in the video.
I only know that they may be among the few real high-tech little pinks and hackers, and they really can do this. They are more terrible than the police; sometimes even the government will not listen to them. I don't know what organizations like this will do in 2022.
Sometimes I think about what would be better. For my own reasons, I know I couldn't join yours. And I can't keep up "doublethink" (ps: I'm not an official; officials never doublethink). The only difference between China and 1984 may be "war is peace": Chinese government officials say China loves peace, and in fact the CCP has indeed waged fewer wars than other countries. But that may be the Chinese nature; the CCP may also be preserving something of it.
The CCP is not as bad as Russia's government, so any reform is more difficult. Reform may not work fast enough, or really work at all; only revolution.
Linear algebra by Donald Trump 29 Created 2022-11-02 Updated 2025-04-18
This is a section about Linear algebra!
Linear algebra is a very important subject about which there is a lot to say.
For example, this sentence. And then another one.
Fundamental theorem of calculus by Donald Trump 29 Created 2022-11-02 Updated 2025-04-18
This is a section about Fundamental theorem of calculus!
Fundamental theorem of calculus is a very important subject about which there is a lot to say.
For example, this sentence. And then another one.
