Michael Theroux on Substack

Posted in Uncategorized on September 3, 2025 by Michael Theroux

https://michaeltheroux.substack.com/

Foliotroves to Publish New Book on Simulation Theory

Posted in Uncategorized on August 25, 2025 by Michael Theroux

The Architect’s Code: My View on Simulation Theory

by Michael Theroux

For many years, I have explored the possibility that our reality is a meticulously crafted simulation. This is not a flight of fancy, but a serious line of inquiry that I’ve pursued through the lens of mathematics and physics. Many see simulation theory as a sci-fi concept, but I believe it offers a compelling framework for understanding some of the most fundamental aspects of our universe.

My argument is centered on what I call the “Trinity of Limitation” – three universal constants that may serve as evidence of a designed system: the golden mean (φ), pi (π), and the speed of light (c).

  • The Golden Mean (φ ≈ 1.61803): This constant appears everywhere, from the spirals of galaxies to the patterns of seashells. In a simulated universe, a constant like this could act as a “recursive constraint,” a hard-coded rule that optimizes the creation of structures while conserving computational resources. It’s the simulation’s elegant shortcut for generating complexity efficiently.
  • Pi (π ≈ 3.14159): We find pi in every circle, every sphere. It is the geometric constant of a closed system. In a simulation, pi could be a parameter that ensures the stability of geometry and allows for the creation of intricate patterns without the need for infinite precision. It prevents the system from breaking down into chaotic, un-simulatable messiness.
  • The Speed of Light (c = 299,792,458 m/s): The ultimate speed limit of the universe. This isn’t just a physical law; it could be the “universal boundary condition” of the simulation. It sets a maximum processing speed and maintains causality, preventing information from traveling faster than the program can render it.

Together, these constants form a framework that not only defines our physical world but also constrains our consciousness and technological progress. In my view, the consistent failure of humanity to transcend these limits – to break the speed of light, for example – is not an accidental quirk of physics. Instead, it suggests that the simulation’s code is robust and deliberately designed to keep us within its boundaries.

I do believe the simulation hypothesis is a serious and intellectually valuable concept, but it’s not about proving that a super-intelligent civilization is running a computer program. It’s more about recognizing that the fundamental constants of our reality could be clues left by an “architect,” and that our greatest scientific limitations may simply be the laws of the code we live in.

Earthquakes and Solar Activity: Correlation Analysis and Experimental Predictions

Posted in Uncategorized on July 1, 2025 by Michael Theroux

Here’s an experimental application I’ve been working on for many years. It is still a work in progress, and all recommendations and suggestions are welcome. It is a very active project and may change regularly.

Earthquakes and Solar Activity: Correlation Analysis and Experimental Predictions

The Finite Veil: Mathematical Limits and the Simulated Cosmos

Posted in Uncategorized on May 28, 2025 by Michael Theroux

The Finite Veil: Mathematical Limits and the Simulated Cosmos

by Michael Theroux
May 28, 2025

Preface: The Framework of a Simulated Universe

The simulation hypothesis, proposed by philosopher Nick Bostrom (2003), posits that our reality may be a computational construct, akin to a highly advanced simulation run by an unknown entity or civilization. This paper explores this hypothesis through the lens of three fundamental mathematical constants: the golden mean (φ ≈ 1.6180339887), pi (π ≈ 3.1415926535), and the speed of light (c = 299,792,458 m/s). These constants, ubiquitous in mathematics and physics, may serve as evidence of a designed system, where their fixed values act as computational boundaries, preventing entities within the simulation—namely, humans—from transcending its limits.

In computational terms, a simulation requires rules to maintain stability and efficiency. Constants like φ, π, and c could represent optimized parameters, hard-coded into the universe’s algorithms to govern structure, geometry, and causality. This paper argues that these constants form a framework that not only defines the physical world but also constrains human consciousness and technological progress, suggesting a deliberate design by an architect (a term used here to denote the simulation’s creator, whether a programmer, intelligence, or process). By examining each constant’s properties, prevalence, and implications, we aim to assess whether they indicate a simulated reality and why attempts to surpass these limits consistently fail. The following sections combine mathematical analysis, physical principles, and computational theory to explore this hypothesis, offering a scientific perspective on our place within a potentially finite cosmos.

Part 1: The Golden Mean – A Recursive Constraint in the Simulation

The golden mean, φ = (1 + √5)/2 ≈ 1.6180339887, is a mathematical constant defined by the equation φ = 1 + 1/φ, leading to a recursive relationship that manifests in the Fibonacci sequence (1, 1, 2, 3, 5, 8, …), where the ratio of consecutive terms approaches φ. This constant appears extensively in nature, from the arrangement of leaves (phyllotaxis) to the spirals of galaxies, and in human constructs like architecture and art. Its prevalence suggests a fundamental role in the structure of our reality, potentially as a computational efficiency mechanism within a simulated universe.
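
As a concrete illustration of that recursion, here is a minimal Python sketch (my own addition, not part of the original essay) comparing the closed-form value φ = (1 + √5)/2 with the ratio of consecutive Fibonacci terms:

```python
import math

# Closed-form value of the golden mean.
phi = (1 + math.sqrt(5)) / 2

# The ratio of consecutive Fibonacci terms converges toward phi,
# reflecting the recursion phi = 1 + 1/phi.
a, b = 1, 1
for n in range(2, 21):
    a, b = b, a + b
    ratio = b / a
    print(f"F({n + 1})/F({n}) = {ratio:.10f}   error = {abs(ratio - phi):.2e}")

print(f"phi           = {phi:.10f}")
```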

In biological systems, φ governs optimal packing and growth patterns. For example, the spiral arrangement of seeds in a sunflower follows Fibonacci numbers, maximizing space and resource distribution. This optimization could reflect a simulation’s need to conserve computational resources, using φ to generate complex structures with minimal coding. Mathematically, φ’s irrationality ensures that its decimal expansion is infinite and non-repeating, making it an efficient constant for generating self-similar patterns without requiring infinite precision in the simulation’s code.
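
To make the packing claim concrete, here is a short sketch of the standard phyllotaxis construction (Vogel’s model, an illustrative choice of mine rather than anything specified in the paper): seed k is placed at an angle of k times the golden angle 2π(1 − 1/φ) and at a radius proportional to √k.

```python
import math

PHI = (1 + math.sqrt(5)) / 2
GOLDEN_ANGLE = 2 * math.pi * (1 - 1 / PHI)  # about 137.5 degrees

def sunflower_seeds(n):
    """Return (x, y) positions for n seeds under Vogel's phyllotaxis model."""
    seeds = []
    for k in range(1, n + 1):
        theta = k * GOLDEN_ANGLE   # rotate each new seed by the golden angle
        r = math.sqrt(k)           # radius grows as sqrt(k) for even packing
        seeds.append((r * math.cos(theta), r * math.sin(theta)))
    return seeds

# Print the first few positions of a 200-seed head.
for x, y in sunflower_seeds(200)[:5]:
    print(f"({x:7.3f}, {y:7.3f})")
```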

From the perspective of the simulation hypothesis, φ’s recursive nature—where each iteration depends on the previous—may serve as a computational boundary. It creates a self-referential loop, limiting the system’s ability to produce outcomes beyond its predefined structure. Human attempts to exploit φ, such as in algorithms for pattern recognition or fractal modeling, remain constrained by its fixed value. Efforts to transcend this constant, such as creating non-φ-based growth models, are either computationally inefficient or incompatible with observed natural systems, suggesting that φ is a hard-coded limit designed to prevent deviation from the simulation’s rules.

The golden mean’s ubiquity and mathematical properties indicate that it may be a deliberate feature of the simulation, optimizing form and function while enforcing a structural boundary. Its presence in both natural and human systems suggests that the architect intended φ to shape our perception and creativity, ensuring that our endeavors remain within the simulation’s computational framework.

Part 2: Pi – The Geometric Constant of a Closed System

Pi, defined as π ≈ 3.1415926535, is the ratio of a circle’s circumference to its diameter in Euclidean geometry. As an irrational number, π’s decimal expansion is infinite and non-repeating, yet its value is fixed and universal across physical systems. Pi appears in equations governing wave mechanics, orbital dynamics, and quantum systems, making it a cornerstone of the universe’s geometric and physical structure. Within the simulation hypothesis, π may function as a computational constant that enforces cyclic stability, ensuring that the universe operates as a closed, self-consistent system.

In physics, π emerges in the equations of general relativity (e.g., Einstein’s field equations) and quantum mechanics (e.g., the Schrödinger equation), where it defines the geometry of spacetime and wavefunctions. Its irrationality suggests a high degree of complexity in the simulation’s code, allowing for intricate patterns without requiring the system to resolve π to a finite number of digits. This property could be a design choice, enabling the simulation to model circular and periodic phenomena—planetary orbits, electromagnetic waves, atomic structures—while maintaining computational efficiency.

From a simulation perspective, π’s role in creating cyclic systems (e.g., orbits, oscillations) suggests it is a mechanism to prevent divergence from the programmed framework. Any attempt to alter π’s value would destabilize these systems, as circular geometries and periodic behaviors depend on its constancy. Human efforts to compute π to extreme precision (e.g., trillions of digits) reveal its infinite complexity but yield no practical means to transcend its role. Proposals to redefine geometry with non-π constants (e.g., in non-Euclidean spaces) remain theoretical and incompatible with the observed universe, reinforcing π as a fixed boundary.
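
To make the finite-precision point concrete, here is a minimal Monte Carlo sketch (my own illustration, not from the paper) showing that any finite computation only ever approximates π:

```python
import random

def estimate_pi(samples: int) -> float:
    """Estimate pi from the fraction of random points in the unit square
    that land inside the quarter circle of radius 1."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / samples

for n in (1_000, 100_000, 1_000_000):
    print(f"{n:>9} samples -> pi ~= {estimate_pi(n):.6f}")
```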

The simulation hypothesis posits that π’s irrational yet constant nature is a deliberate feature, ensuring that the universe’s geometric and cyclic properties remain stable. By embedding π into the fabric of reality, the architect created a system where complexity is bounded, preventing entities within the simulation from accessing or altering the underlying code.


Part 3: The Speed of Light – The Universal Boundary Condition

The speed of light in a vacuum, c = 299,792,458 m/s, is a fundamental constant in Einstein’s theory of special relativity, defining the maximum speed at which information and matter can travel. It governs causality, ensuring that cause precedes effect, and sets the scale of spacetime through the Lorentz transformation. Within the simulation hypothesis, c may represent a computational boundary condition, limiting the processing speed of the simulation and preventing entities from accessing regions beyond its programmed framework.

Relativity demonstrates that c is absolute: as an object with mass approaches c, its energy requirements approach infinity, making acceleration beyond c impossible. This limit is encoded in the equation E = mc² and the relativistic mass increase formula, m = m₀/√(1 – v²/c²). In computational terms, c could be analogous to a clock rate in a simulation, setting the maximum frequency at which events can be processed. Exceeding c would require infinite computational resources, a scenario incompatible with a finite system.
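
A small numeric sketch of that divergence (illustrative values, my own addition): the total energy E = γm₀c², with γ = 1/√(1 − v²/c²), grows without bound as v approaches c.

```python
import math

C = 299_792_458.0  # speed of light, m/s
M0 = 1.0           # rest mass of a hypothetical 1 kg object

def total_energy(v: float) -> float:
    """Relativistic total energy E = gamma * m0 * c^2 at speed v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * M0 * C ** 2

for fraction in (0.5, 0.9, 0.99, 0.999, 0.999999):
    print(f"v = {fraction:>8} c   E = {total_energy(fraction * C):.3e} J")
```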

Human attempts to bypass c—through concepts like wormholes, Alcubierre drives, or quantum entanglement—face significant theoretical and practical barriers. Wormholes require exotic matter with negative energy, which has not been observed; Alcubierre drives demand unattainable energy scales; and entanglement does not permit faster-than-light communication due to quantum no-signaling theorems. These failures suggest that c is a hard-coded limit, designed to maintain the simulation’s integrity by restricting access to its boundaries.

The speed of light’s role in defining spacetime and causality indicates that it is a fundamental parameter of the simulation, ensuring that all interactions remain within a predictable, computationally manageable framework. Its precise value, fixed by definition in the International System of Units, underscores its role as an unalterable constant, reinforcing the hypothesis that the architect intended to confine entities within the simulation’s temporal and spatial limits.


Part 4: The Simulation Hypothesis – A Computational Framework

The simulation hypothesis, formalized by Nick Bostrom (2003), argues that at least one of the following is true: (1) advanced civilizations never reach the technological capacity to create simulations, (2) they choose not to, or (3) we are almost certainly living in a simulation. Given the computational power of modern systems and projections of future capabilities (e.g., Moore’s Law, quantum computing), the third scenario is statistically plausible. This Part synthesizes the roles of the golden mean (φ), pi (π), and the speed of light (c) as evidence of a computational framework, suggesting that these constants form a system of constraints designed to maintain the simulation’s stability and prevent transcendence.

Bostrom’s argument hinges on the idea that a sufficiently advanced civilization could simulate conscious entities within a computational environment. The constants φ, π, and c may serve as optimized parameters in this environment. The golden mean’s recursive efficiency could minimize computational overhead in modeling natural systems; π’s irrationality allows for complex geometric and periodic behaviors without requiring infinite precision; and c’s fixed value ensures causal consistency, preventing computational errors like causality violations. Together, these constants form a “trinity of limitation,” a set of rules that define the simulation’s structure while restricting its inhabitants.

Human efforts to transcend these constants—through advanced mathematics, physics, or technology—consistently encounter barriers. Attempts to redefine φ in biological or computational models fail to match nature’s efficiency; calculations of π beyond practical precision yield no new insights; and proposals to exceed c violate energy conservation or require unphysical conditions. These failures suggest that the simulation’s code is robust, designed to resist attempts to access or alter its underlying framework.

The constants’ ubiquity and precision support the hypothesis that they are deliberate features of a simulated universe. Their interdependence—φ in growth patterns, π in geometric systems, c in spacetime dynamics—suggests a cohesive computational design, where each constant reinforces the others to maintain a closed system. This framework implies that the architect intended to create a self-contained reality, where transcendence is impossible without altering the fundamental code, a task beyond human capability within the simulation’s rules.


Part 5: The Human Condition – Consciousness Within the Code

Human consciousness, defined as the subjective experience of awareness, thought, and perception, is a complex emergent phenomenon arising from neural networks in the brain, comprising approximately 86 billion neurons and 10¹⁵ synapses. Within the simulation hypothesis, consciousness may be a computational subroutine, designed to process sensory input and generate behavior within the constraints of the simulation’s constants: the golden mean (φ), pi (π), and the speed of light (c). This Part analyzes how these constants shape consciousness and limit human attempts to transcend the simulation’s framework.

The golden mean influences biological structures underlying consciousness. For example, the fractal-like branching of neural networks and vascular systems approximates φ, optimizing information and resource distribution. This efficiency suggests that the architect embedded φ to constrain cognitive architecture, ensuring that consciousness operates within computationally efficient boundaries. Attempts to design artificial neural networks without φ-like patterns often result in suboptimal performance, reinforcing its role as a fixed parameter.

Pi shapes the sensory and cognitive experience of cycles and periodicity. Visual and auditory perception, governed by wave mechanics (e.g., Fourier transforms), relies on π to process periodic signals like light and sound. The brain’s oscillatory patterns, such as alpha (8–12 Hz) and theta (4–8 Hz) waves, also depend on π-based mathematics, suggesting that the simulation’s geometric rules constrain how consciousness interprets the world. Efforts to redefine sensory processing outside π’s framework are incompatible with observed neural function, indicating a coded limit.
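
As an illustrative aside (my sketch, not the author’s), the factor 2π in the Fourier kernel is exactly where π enters this kind of analysis; the snippet below builds a synthetic 10 Hz “alpha-like” signal and recovers its peak frequency with NumPy’s FFT. The sampling rate and noise level are assumptions chosen only for the demo.

```python
import numpy as np

fs = 250.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 4.0, 1.0 / fs)  # 4 seconds of samples

# Synthetic "alpha-like" 10 Hz oscillation plus a little noise.
signal = np.sin(2 * np.pi * 10.0 * t) + 0.2 * np.random.randn(t.size)

# The FFT kernel exp(-2j * pi * f * t) is where pi enters the analysis.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(f"dominant frequency: {peak:.2f} Hz")
```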

The speed of light restricts the temporal scope of consciousness. Sensory input, such as visual perception, is limited by c, as light from external objects takes time to reach the observer (e.g., 8.3 minutes from the Sun). Cognitive processes, however fast they may be at the neural level, are ultimately bound by the simulation’s causal structure, which c enforces. Proposals for faster-than-light perception or communication (e.g., via quantum entanglement) are constrained by no-signaling theorems, suggesting that consciousness cannot operate beyond the simulation’s temporal boundaries.
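
The 8.3-minute figure is simply distance divided by c; a two-line check using standard mean distances (my own illustration):

```python
C = 299_792_458.0      # speed of light, m/s
AU = 1.495978707e11    # mean Earth-Sun distance, metres
MOON = 3.844e8         # mean Earth-Moon distance, metres

print(f"Sun  -> Earth: {AU / C / 60:.2f} minutes")   # about 8.3 minutes
print(f"Moon -> Earth: {MOON / C:.2f} seconds")      # about 1.3 seconds
```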

The tension between human aspirations and these constraints defines the human condition. Consciousness drives us to seek transcendence—through science, philosophy, or spirituality—yet φ, π, and c ensure that our efforts remain within the simulation’s framework. For example, artificial intelligence models approaching human-like cognition are still bound by these constants in their algorithmic design and physical implementation. The simulation hypothesis suggests that consciousness itself is a product of the code, designed to explore the simulation’s possibilities while remaining confined by its rules, a balance that maintains computational stability while allowing for subjective experience.


Epilogue: Beyond the Veil? – Probing the Limits of the Simulation

The simulation hypothesis posits that our universe is a computational construct, with the golden mean (φ), pi (π), and the speed of light (c) as fundamental constants that enforce its boundaries. This paper has argued that these constants form a system of constraints, preventing humanity from transcending the simulation’s framework. However, a critical question remains: did the architect embed clues within these constants that suggest a pathway beyond the simulation, or are they immutable barriers designed to maintain a closed system?

Scientifically, the constants’ properties offer tantalizing hints. The golden mean’s fractal-like presence in nature suggests a recursive algorithm that could, in theory, point to a meta-structure beyond the simulation. For example, fractal patterns in chaotic systems (e.g., Mandelbrot sets) exhibit self-similarity that might reflect a higher-level computational framework. However, no empirical evidence suggests a practical means to exploit φ for transcendence, as its recursive nature reinforces the simulation’s internal consistency.

Pi’s irrationality presents a similar paradox. Its infinite, non-repeating digits could encode information about the simulation’s deeper structure, akin to a compressed data set in computational theory. Yet, efforts to extract such information—through high-precision calculations or alternative geometric frameworks—have yielded no breakthroughs, suggesting that π’s complexity is a feature to maintain system stability rather than a clue to escape.

The speed of light, as a boundary condition, shows anomalies in extreme conditions, such as near black holes or in quantum gravity theories, where spacetime appears to bend or break. Proposals like loop quantum gravity or string theory suggest that c might not be absolute at the Planck scale (10⁻³⁵ m), hinting at a possible interface with the simulation’s underlying code. However, these theories remain speculative, and experimental tests (e.g., at CERN) have not produced evidence of faster-than-light phenomena, reinforcing c’s role as a limit.
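
For reference, the Planck length quoted above follows from combining ħ, G, and c; a quick numeric check with standard constant values (my own sketch, not part of the paper):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0       # speed of light, m/s

planck_length = math.sqrt(HBAR * G / C ** 3)
print(f"Planck length ~= {planck_length:.3e} m")  # about 1.6e-35 m
```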

Human attempts to transcend these constants—through advanced computing, theoretical physics, or consciousness research—consistently encounter barriers, suggesting that the simulation’s design is robust. If clues exist, they may lie in the interplay of these constants, such as their unexpected convergence in certain physical systems (e.g., black hole entropy, where π and c appear together). Alternatively, the architect may have encoded transcendence in the act of inquiry itself, where the pursuit of knowledge within the simulation’s limits generates meaning without requiring escape.
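
One concrete meeting point of these constants is the Bekenstein–Hawking entropy S = k_B c³ A / (4ħG), where the horizon area A = 4π r_s² brings in π and the Schwarzschild radius r_s = 2GM/c² brings in c. A quick numeric sketch for a solar-mass black hole (standard constant values; my own illustration, not a claim from the paper):

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0       # speed of light, m/s
M_SUN = 1.989e30        # solar mass, kg

r_s = 2 * G * M_SUN / C ** 2                     # Schwarzschild radius
area = 4 * math.pi * r_s ** 2                    # horizon area (pi enters here)
entropy = K_B * C ** 3 * area / (4 * HBAR * G)   # Bekenstein-Hawking entropy

print(f"Schwarzschild radius: {r_s:.3e} m")
print(f"Horizon entropy:      {entropy:.3e} J/K")
```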

In conclusion, the golden mean, pi, and the speed of light define a computational framework that constrains our reality. While their properties suggest the possibility of hidden clues, no empirical or theoretical pathway to transcendence has been identified. The simulation hypothesis remains a compelling framework for understanding our universe, urging continued scientific inquiry into its constants and their implications. Whether we are bound forever or destined to glimpse beyond the veil, the search itself defines our place within the cosmos, a testament to the human drive to understand the code that shapes us.


Bibliography for The Finite Veil: Mathematical Limits and the Simulated Cosmos

 Preface: The Framework of a Simulated Universe

– Bostrom, N. (2003). Are you living in a computer simulation? Philosophical Quarterly, 53(211), 243–255.  

  Annotation: This seminal paper introduces the simulation hypothesis, providing the foundational argument that our reality may be a computational construct. It is central to the preface’s discussion of the simulation hypothesis and the role of constants as potential evidence of a designed system.

– Tegmark, M. (2014). Our mathematical universe: My quest for the ultimate nature of reality. Knopf.  

  Annotation: Tegmark’s book explores the idea that the universe is fundamentally mathematical, supporting the preface’s claim that constants like φ, π, and c could be computational parameters in a simulated reality.

– Zuse, K. (1969). Rechnender Raum [Calculating Space]. MIT Technical Translation AZT-70-164-GEMIT.  

  Annotation: Zuse’s early work on digital physics proposes that the universe operates like a computational system, providing a historical basis for the preface’s discussion of a simulated cosmos.

 Part 1: The Golden Mean – A Recursive Constraint in the Simulation

– Livio, M. (2002). The golden ratio: The story of phi, the world’s most astonishing number. Broadway Books.  

  Annotation: Livio provides a comprehensive overview of the golden mean’s mathematical properties and its prevalence in nature and art, supporting Part 1’s analysis of φ as a computational efficiency mechanism in the simulation.

– Douady, S., & Couder, Y. (1996). Phyllotaxis as a dynamical self-organizing process. Journal of Theoretical Biology, 178(3), 255–274.  

  Annotation: This study explains the golden mean’s role in phyllotaxis (e.g., sunflower seed arrangements), providing empirical evidence for Part 1’s argument that φ optimizes natural systems in a simulated universe.

– Falconer, K. (2013). Fractal geometry: Mathematical foundations and applications (3rd ed.). Wiley.  

  Annotation: Falconer’s work on fractals and self-similar patterns connects the golden mean’s recursive nature to computational efficiency, supporting Part 1’s hypothesis that φ is a coded constraint.

 Part 2: Pi – The Geometric Constant of a Closed System

– Arndt, J., & Haenel, C. (2006). Pi unleashed. Springer.  

  Annotation: This book details pi’s mathematical properties and computational challenges, supporting Part 2’s discussion of π’s irrationality as a feature of the simulation’s complexity and stability.

– Weinberg, S. (1992). Dreams of a final theory: The scientist’s search for the ultimate laws of nature. Pantheon Books.  

  Annotation: Weinberg’s exploration of fundamental constants in physics, including π’s role in wave mechanics and relativity, underpins Part 2’s argument that π enforces cyclic stability in the simulation.

– Borwein, J. M., & Bailey, D. H. (2008). Mathematics by experiment: Plausible reasoning in the 21st century. A K Peters/CRC Press.  

  Annotation: This book discusses computational approaches to π, highlighting its infinite complexity and supporting Part 2’s claim that π’s irrationality prevents transcendence within the simulation.

 Part 3: The Speed of Light – The Universal Boundary Condition

– Einstein, A. (1905). On the electrodynamics of moving bodies. Annalen der Physik, 17(10), 891–921.  

  Annotation: Einstein’s original paper on special relativity establishes the speed of light as a universal constant, providing the foundation for Part 3’s analysis of c as a computational boundary.

– Magueijo, J. (2003). Faster than the speed of light: The story of a scientific speculation. Perseus Publishing.  

  Annotation: Magueijo’s exploration of theories challenging the speed of light’s constancy supports Part 3’s discussion of why surpassing c is impossible, reinforcing its role as a simulation limit.

– Thorne, K. S. (1994). Black holes and time warps: Einstein’s outrageous legacy. W. W. Norton & Company.  

  Annotation: Thorne’s work on relativity and exotic phenomena (e.g., wormholes) provides context for Part 3’s evaluation of attempts to bypass c, highlighting their theoretical and practical barriers.

 Part 4: The Simulation Hypothesis – A Computational Framework

– Bostrom, N. (2003). Are you living in a computer simulation? Philosophical Quarterly, 53(211), 243–255.  

  Annotation: Repeated from the preface, this paper is central to Part 4’s formal introduction of the simulation hypothesis and its implications for a computationally bounded universe.

– Lloyd, S. (2006). Programming the universe: A quantum computer scientist takes on the cosmos. Knopf.  

  Annotation: Lloyd’s work on the universe as a quantum computer supports Part 4’s argument that φ, π, and c are optimized parameters in a computational framework.

– Deutsch, D. (1997). The fabric of reality: The science of parallel universes—and its implications. Penguin Books.  

  Annotation: Deutsch’s exploration of computational and physical reality provides a theoretical basis for Part 4’s synthesis of constants as evidence of a designed, closed system.

 Part 5: The Human Condition – Consciousness Within the Code

– Koch, C. (2019). The feeling of life itself: Why consciousness is widespread but can’t be computed. MIT Press.  

  Annotation: Koch’s work on the neural basis of consciousness supports Part 5’s analysis of consciousness as an emergent property within the simulation, constrained by φ, π, and c.

– Tononi, G., & Edelman, G. M. (1998). Consciousness and complexity. Science, 282(5395), 1846–1851.  

  Annotation: This study links consciousness to neural complexity, providing evidence for Part 5’s argument that cognitive processes are shaped by the simulation’s computational limits.

– Penrose, R. (1989). The emperor’s new mind: Concerning computers, minds, and the laws of physics. Oxford University Press.  

  Annotation: Penrose’s exploration of consciousness and physical laws supports Part 5’s discussion of how φ, π, and c constrain cognitive and perceptual boundaries.

 Epilogue: Beyond the Veil? – Probing the Limits of the Simulation

– Smolin, L. (2006). The trouble with physics: The rise of string theory, the fall of a science, and what comes next. Houghton Mifflin.  

  Annotation: Smolin’s critique of speculative theories like string theory supports the epilogue’s cautious evaluation of quantum gravity as a potential clue to transcending the simulation.

– Rovelli, C. (2004). Quantum gravity. Cambridge University Press.  

  Annotation: Rovelli’s work on loop quantum gravity provides a theoretical basis for the epilogue’s discussion of anomalies at the Planck scale as possible hints of the simulation’s deeper structure.

– Barrow, J. D. (2002). The constants of nature: The numbers that encode the deepest secrets of the universe. Pantheon Books.  

  Annotation: Barrow’s analysis of physical constants supports the epilogue’s reflection on whether φ, π, and c encode clues to a reality beyond the simulation, emphasizing their role in defining our universe.

The Russian Hacking Diatribe, and Why It Is Complete Agitprop Nonsense

Posted in Uncategorized on September 6, 2024 by Michael Theroux

Originally published in 2600 Magazine, October 2017 – and relevant today in light of new evidence.

The Russian Hacking Diatribe, and Why It Is Complete Agitprop Nonsense

By Michael Theroux

There is a necessity for large corporate interests controlling the government to create agitation once again with Russia and other enemy states in order to gain the support of the people to funnel massive funds to the Military Industrial Complex. It’s a plausible tactic where the politicians of this country are sponsored by giant defense corporations. If they’re pulling out of active wars, but in desperate need to keep fueling the military-industrial complex that signs their paychecks, they could cleverly revive the Cold War game plan. And, they have.

Recent and past “news” delivered by the MSM – which has wholly embraced the intelligentsia’s claims offered up by the CIA, and now other 3-letter agencies – that a Russian state-sponsored hack of the DNC and the RNC had an effect in swaying the US election results is patently absurd, and pure agitprop. To date, there is absolutely no conclusive evidence that anything of the sort occurred. The Straw Man tactic has been employed again, and it appears to be working as usual.

The only reason to continually create new bad guys, or conjure up the old bad guys is to fill the coffers of corporate Department of Defense contractors who lobby the shit out of our government. THEY DON’T WORK FOR US. Our so-called government officials work for the money they get from corporate interests. And, they need those paychecks to keep coming in.

Now, I could go into the sexy details of what it takes to track down a real state hacker (most of what the official rhetoric has to offer is juvenile and pedantic), but it’s pointless when you realize this has nothing to do with hacking. There is a bigger picture here people, and it’s emblazoned with a scarlet letter sewn into the very fabric of our willful unconsciousness. We need to wake up, and not accept this bullshit any longer.

Breakdown of the “So-called” Evidence for Russian Hacking, and the Sad State of Cybersecurity

Was there definitive evidence contained in the JAR (Joint Analysis Report – GRIZZLY STEPPE – Russian Malicious Cyber Activity) or in FireEye’s analysis, “APT28: A WINDOW INTO RUSSIA’S CYBER ESPIONAGE OPERATIONS,” that Russian state-sponsored hackers compromised the DNC server with malware and then leaked any acquired documents to WikiLeaks? Absolutely not. And here’s why:

Let’s first run through the “so-called” evidence – basically two “smoking guns” in the analysis – and a few other questions pertinent to the investigation. I’ll address each point with some technical details and maybe a little common-sense evaluation:

  1. Certain malware settings suggest that the authors did the majority of their work in a Russian language build environment.
  2. The malware compile times corresponded to normal business hours in the UTC+4 time zone, which includes major Russian cities such as Moscow and St. Petersburg.
  3. Ultimately, WikiLeaks was the source of the dissemination of the compromised data – where did they acquire it?
  4. According to media sources, all 17 US intelligence agencies confirmed that Russian state-sponsored hackers were the source of the attacks.
  5. Was this “so-called” hack designed to affect the outcome of the US election?

Let us now address each of these points specifically (some of this may be more technical for the average human – Program or be Programmed):

1. Certain malware settings suggest that the authors did the majority of their work in a Russian language build environment.

APT28 (Advanced Persistent Threat 28) consistently compiled Russian language settings into their malware.

Locale ID            Primary language (country)    Samples
0x0419               Russian (ru)                  59
0x0409               English (us)                  27
0x0000 or 0x0800     Neutral locale                16
0x0809               English (uk)                  1

By no means is this evidence of anything. It could even be a US-sponsored hack, for that matter, obfuscating its origin by using a Russian language build environment. The attribution is pure speculation, and any security researcher knows this technique has been used effectively by malware authors in the past.

2. The malware compile times corresponded to normal business hours in the UTC+4 time zone, which includes major Russian cities such as Moscow and St. Petersburg.

The FireEye report states:

“During our research into APT28’s malware, we noted two details consistent across malware samples. The first was that APT28 had consistently compiled Russian language settings into their malware. The second was that malware compile times from 2007 to 2014 corresponded to normal business hours in the UTC + 4 time zone, which includes major Russian cities such as Moscow and St. Petersburg. Use of Russian and English Language Settings in PE Resources include language information that can be helpful if a developer wants to show user interface items in a specific language. Non-default language settings packaged with PE resources are dependent on the developer’s build environment. Each PE resource includes a “locale” identifier with a language ID composed of a primary language identifier indicating the language and a sublanguage identifier indicating the country/region.”

Any malware author could intentionally leave behind false clues in the resources section, pointing to Russia or any other country. These signatures are very easy to manipulate, and anyone with a modicum of Googling skills can alter the language identifier of the resources in a PE file. ANY state-sponsored entity could easily obfuscate the language identifier in this way. One could also use an online compiler or an online integrated development environment (IDE) through a proxy service to alter compile times, making them appear to come from any chosen region. The information in the FireEye report is spurious at best.
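
To make concrete how easily these indicators can be read (and, by the same token, forged), here is a minimal sketch using the third-party pefile library. The file name sample.exe is a placeholder, and this is my own illustration rather than anything taken from the JAR or FireEye reports.

```python
import datetime
import pefile  # third-party: pip install pefile

# Hypothetical sample path, for illustration only.
pe = pefile.PE("sample.exe")

# Compile timestamp: a 32-bit value the linker writes into the COFF file
# header at build time; it can be set to anything the builder chooses.
ts = pe.FILE_HEADER.TimeDateStamp
print("TimeDateStamp:", datetime.datetime.utcfromtimestamp(ts), "UTC")

# Resource language/locale IDs: walk the resource directory, if present.
if hasattr(pe, "DIRECTORY_ENTRY_RESOURCE"):
    for res_type in pe.DIRECTORY_ENTRY_RESOURCE.entries:
        if not hasattr(res_type, "directory"):
            continue
        for res_name in res_type.directory.entries:
            if not hasattr(res_name, "directory"):
                continue
            for res_lang in res_name.directory.entries:
                # e.g. 0x0419 = Russian, 0x0409 = English (US)
                print("resource locale id:", hex(res_lang.id))
```

Anything this easy to read is just as easy to write: a build script can set both fields to whatever story its author wants to tell.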

3. Ultimately, WikiLeaks was the source of the dissemination of the compromised data – where did they acquire it? 

Julian Assange, the founder of WikiLeaks, has repeatedly stated that the source of the information they posted was NOT from ANY state-sponsored source – including Russia. In fact, in all of the reports (including the JAR and FireEye) they never once mention WikiLeaks. Strange.

4. According to media sources, all 17 US intelligence agencies confirmed Russian state-sponsored hackers were the source of the attacks.

This is hilarious – many of these 17 agencies wouldn’t know a hack from a leak, nor would they have been privy to any real data other than what a couple of other agencies reported – reporting that was thin, barely circumstantial, and wholly derived from a third-party security analysis:

  1. Air Force Intelligence
  2. Army Intelligence
  3. Central Intelligence Agency
  4. Coast Guard Intelligence
  5. Defense Intelligence Agency
  6. Department of Energy
  7. Department of Homeland Security
  8. Department of State
  9. Department of the Treasury
  10. Drug Enforcement Administration
  11. Federal Bureau of Investigation
  12. Marine Corps Intelligence
  13. National Geospatial-Intelligence Agency
  14. National Reconnaissance Office
  15. National Security Agency
  16. Navy Intelligence
  17. Office of the Director of National Intelligence

5. Was this “so-called” hack designed to affect the outcome of the US election?

It is clear, even if there were state-sponsored hacks, that the information provided in WikiLeaks had no relation to Russian manipulation of US elections. The information speaks for itself – it is the content of the leaks that is relevant – and it matters not where it came from. DNC corruption is the real issue, and any propaganda agenda designed to direct attention away from the damage the info presents is wholly deflection.

Most of the references used in the JAR report are really from third-party cybersecurity firms looking to “show off” their prowess at rooting out a hacker culprit. This ultimately means money for them. This is the reality of the sad state of security today. Note that not one report mentions that every single one of the compromises was directed at Microsoft operating systems. Why, when everyone knows that Microsoft is the most insecure OS and is specifically targeted by malware authors, state-sponsored or otherwise, do any governments still use it? Fortunately, there are real security researchers out there who see through the smoke and mirrors and aren’t buying the BS handed them by government entities and the media outlets they control.

The Anti-Forensic Marble Framework

With the release of the “Marble Framework” on WikiLeaks, we come upon more evidence that the entire so-called “Russian Hacking” story could very well have been a US state-sponsored hack – and it’s more likely.

From WikiLeaks:

“Marble is used to hamper forensic investigators and anti-virus companies from attributing viruses, trojans and hacking attacks to the CIA. Marble does this by hiding (“obfuscating”) text fragments used in CIA malware from visual inspection. This is the digital equivalent of a specialized CIA tool to place covers over the English language text on U.S. produced weapons systems before giving them to insurgents secretly backed by the CIA.”

CIA Leaks

I’ve been through many of the docs included in Vault 7 and it isn’t anything at all new or revelatory. I called this back in 2005 and detailed much of it back then. Most thought me a kook. Much of what I’ve looked at so far is valid, although it’s very basic info any teenage hacker attending DEFCON would know about.

It’s old crap, and I’d put money on it that the CIA itself “leaked” the data.

And finally, the most recent stories of Russian attempts to hack into U.S. voting systems are even more ridiculous in their claims, and were based exclusively on info from the Department of Homeland Security. Apparently, 21 states (in last year’s presidential election, as cited by the MSM) were targeted by “Russian” hackers. These claims about Russian hacking get ineptly hyped by media outlets, and are almost always based on nothing more than fact-free claims from government officials, only to look completely absurd under even minimal scrutiny by real security experts, because they are entirely lacking in any real evidence.

“In our age there is no such thing as ‘keeping out of politics.’ All issues are political issues, and politics itself is a mass of lies, evasions, folly, hatred and schizophrenia.” 

~ George Orwell

For complete information, please check out the links cited as references below:

http://arstechnica.com/security/2016/12/did-russia-tamper-with-the-2016-election-bitter-debate-likely-to-rage-on/

https://www.codeandsec.com/Sophisticated-CyberWeapon-Shamoon-2-Malware-Analysis

https://www.dni.gov/index.php/intelligence-community/members-of-the-ic

http://www.usatoday.com/story/news/politics/onpolitics/2016/10/21/17-intelligence-agencies-russia-behind-hacking/92514592/

http://www.defenseone.com/technology/2016/12/accidental-mastermind-dnc-hack/134266/

https://www.rt.com/usa/372630-wikileaks-20k-reward-obama/

Yet Another Major Russia Story Falls Apart. Is Skepticism Permissible Yet?

HACKER HISTORY – MDT OR “THE MASS DEPOPULATION TRIO”

Posted in Uncategorized on April 30, 2024 by Michael Theroux

HACKER HISTORY – MDT OR “THE MASS DEPOPULATION TRIO”

by Doc Slow

Back in 1998, under a pseudonym, I wrote an article called “Y2k and New Industry of Hysteria.” One of my colleagues rightfully proclaimed that the ‘Industry of Hysteria’ was nothing “new,” and she was correct in thinking so. So correct, in fact, that her disparagement of my use of the word “new” in the title of the article forced my proposal to her. We were quickly married, and shortly thereafter, quickly divorced. It is of little consequence to the forthcoming story.

In 1983, I was introduced to the personal computer. I had just started my second year in the Armed Forces, and one day after payday, while wandering around the Post Exchange (PX) on base (the Post Exchanges sell consumer goods and services to authorized military personnel), I came across a store display featuring the new “TI-99/4A” personal computer. It was priced around $350, and I grabbed a box off the top of the display and just bought it. When I got the computer home, I proceeded to dive right in and start programming. My subject for the first program I would create? The Tarot! Yes, the very first computer program I wrote was a Tarot card reading application. My grandmother had introduced me to the Tarot when I was a teen, so I had a pretty good understanding of what this divinatory oracle was about.

My knowledge of how to create a program with the graphics necessary to make it an interactive experience was non-existent, but after reading the documentation, I was able to portray a rudimentary graphic representation of what is referred to as the “Celtic Cross” reading. That was actually the hard part. The easy part was creating the data – the “meanings” of the cards – to be selected at random using the built-in pseudorandom number generator (PRNG) I called from the Tarot application. After 36 hours of continuous coding, my first program was finished. It was a very poor portrayal of the esoteric fortune-telling card game, but it worked as advertised. I even submitted the program to Texas Instruments for inclusion in their gaming offerings, but naturally, they declined.
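
For flavor, here is a rough modern analogue in Python of the random-draw core of that program (the original ran on the TI-99/4A using its built-in PRNG; the card names and spread positions below are illustrative, and only the major arcana are included to keep the sketch short):

```python
import random

MAJOR_ARCANA = [
    "The Fool", "The Magician", "The High Priestess", "The Empress",
    "The Emperor", "The Hierophant", "The Lovers", "The Chariot",
    "Strength", "The Hermit", "Wheel of Fortune", "Justice",
    "The Hanged Man", "Death", "Temperance", "The Devil", "The Tower",
    "The Star", "The Moon", "The Sun", "Judgement", "The World",
]

CELTIC_CROSS_POSITIONS = [
    "Present", "Challenge", "Past", "Future", "Above",
    "Below", "Advice", "External", "Hopes/Fears", "Outcome",
]

# Draw ten distinct cards with the standard-library PRNG.
draw = random.sample(MAJOR_ARCANA, k=len(CELTIC_CROSS_POSITIONS))
for position, card in zip(CELTIC_CROSS_POSITIONS, draw):
    print(f"{position:>11}: {card}")
```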

Later in the 80s, I would try my hand at creating new algorithms for graphic fractal generation, and I went on to create some simple data encryption programs. At first, I wrote some basic substitution ciphers, and then I returned to using a pseudorandom number generator in the algorithms. But pseudorandomness was not good enough for me – a pseudorandom sequence is not truly random, and keys generated with a PRNG could conceivably be cracked with present technology. I had read of a more secure method of encryption and decided I’d try my hand at a “One-time Pad.” In cryptography, the one-time pad (OTP) is an encryption technique that cannot be cracked, but it requires a one-time pre-shared key the same size as, or longer than, the message being sent. In this technique, a plaintext is paired with a random secret key. Each bit or character of the plaintext is then encrypted by combining it with the corresponding bit or character from the pad using modular addition. If the key is truly random, is at least as long as the plaintext, is never reused in whole or in part, and is kept completely secret, the resulting ciphertext will be impossible to decrypt or break. I then went on to write the first functional OTP encryption program for the DOS operating system.
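
A minimal working sketch of the technique as described (byte-wise modular addition with a single-use random pad; modern Python for illustration, not the original DOS program):

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt by adding each pad byte to each plaintext byte mod 256."""
    assert len(key) >= len(plaintext), "pad must be at least as long as the message"
    return bytes((p + k) % 256 for p, k in zip(plaintext, key))

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """Decrypt by subtracting the pad bytes mod 256."""
    return bytes((c - k) % 256 for c, k in zip(ciphertext, key))

message = b"MEET AT DAWN"
pad = secrets.token_bytes(len(message))   # truly one-time: never reuse it

ciphertext = otp_encrypt(message, pad)
print(ciphertext.hex())
print(otp_decrypt(ciphertext, pad))       # b'MEET AT DAWN'
```

The security rests entirely on the conditions listed above: the pad must be random, at least as long as the message, used once, and kept secret.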

In 1989, I got into creating computer bulletin board systems (BBS). A bulletin board system or BBS was a dial-up connected computer server running software that allowed users to connect to the system using a terminal program. Once connected and logged in, the user could perform functions such as uploading and downloading software and data, reading news and bulletins, and exchanging messages with other users through email, public message boards, and sometimes via direct chatting. I ran several BBSes from 1989 to 1994 – the content of them would include all manner of science and technology topics, and several were all about computer programming and hacking. One of the BBSes I ran was referred to in the book, “The Hacker Crackdown” by Bruce Sterling (1992).

The relevance of this brief history of my early involvement in the personal computer movement is only important to the story in that it would later be a catalyst for writing about Y2k.

The Y2k article I wrote in 1998 focused on all the hype surrounding the “The Year 2000,” and how computers and everything else with some kind of digital control system would cease to function. The article got published in an arcane, but well-distributed science newsletter, and the response to it was less than gratifying. Computer experts came out of their digital caves in droves to disparage the dispatch I had meaningfully crafted to calm the public fear – fear that was being inflamed by writers, journalists, and talk radio hosts who had little understanding of basic computer functions and hardware. These disparagements were easily shrugged off as typical of the derision received on many occasions regarding much of the material the journal published. But there was something else. Other publishers, looking for an alternative viewpoint on Y2k, were asking permission to republish the article in their own magazines. And, because I wanted to get my viewpoint out there, I gave these publications carte blanche to do so. The article was re-published in no less than 12 different magazines – many of which would eventually publish a retraction of the article – stating they were misinformed by the writer. Their published retractions would appear in editions of their magazines long before the bell tolled midnight on January first, the year 2000. Apparently, they had received so many negative letters about my article, and many from so-called “credentialed experts” that they all felt it necessary to print a retraction, in most cases stating they were misled by what I wrote, and that my information on Y2k was completely wrong. It turned out it was spot on, but very few listened or believed it.

It was around this time that I discovered late-night talk radio programs – specifically “Coast to Coast” hosted by Art Bell, and “Sightings” hosted by Jeff Rense. These talk shows and their hosts truly embraced the worlds of alternative science, and the guests they interviewed were a direct reflection of late-night talk radio kookiness. Guests such as Richard Hoagland (the “Face on Mars” discoverer), David Oates (Reverse Speech pioneer), Gary North (Y2k Gloom and Doomer), and Ed Dames (Remote Viewing) were regulars on the show, and it was a great source of late-night entertainment. But, something about these shows really started to bug me. Here we were nearing the end of the millennium, and the advertising on commercial breaks was all about surviving the coming apocalypse. Ads for wind-up radios and a year’s supply of food went along perfectly with the doom-and-gloom ideology the guests were offering in their lyrical mantras over the AM airwaves.

If you were a listener in the late ‘90s, it was a time of wild conspiracy theories and fabricated prophecies offered to listeners with very few solutions save buying something that they advertised. It was enough of a catalyst to engender a willful response born of my distaste for the subject matter, and respond I did.

Around the same time, I fell in with a couple of online miscreants, and we would later be dubbed the “Mass Depopulation Trio.” MDT was a loose group of hacker-types that had taken over the alt-fan-art-bell IRC chat room. This internet chat room consisted of fans of Art Bell and a group of characters who absolutely hated him. After looking at what people were saying in the chat room, I rather quickly fell into the latter group. And then, well, I was hooked.

The Mass Depopulation Trio organically grew from the roots of the IRC chatroom, and then they developed a website – disinfotainment.com. “Disinfotainment” was an internet BBS forum and so much more. MDT started putting together audio mashups of talk radio hosts’ dialogs, mixing them with certain sound effects and snippets of songs. Some of the music was actually composed and recorded by real musicians for these so-called “spams.” MDT initially consisted of three pseudonymous characters: “MickeyX”, “Johnny Pate”, and “Dr. HD Slow,” all of whom had a devilish ability on the internet to make a mockery of, and virtually destroy, any and all resident kooks who were steadfast champions of the radio show and its host. These frustrated kooks were always threatening to call the FBI on MDT, and I’m sure many of them did so.

While MDT was an “all for one, and one for all” trio, they did a lot of their work independently of one another and became involved in several shenanigans that would later become legend. Of all MDT’s achievements, “Mel’s Hole” would win hands down.

Mel’s Hole is, according to an urban legend, an allegedly “bottomless pit” near Ellensburg, Washington. Claims about it were first made on Art Bell’s radio show, Coast to Coast AM, by a guest calling himself “Mel Waters.” Later investigation revealed no such person was listed as residing in that area, and there was no credible evidence that the hole ever existed. From the Wikipedia site on Mel’s Hole:

“The legend of the mythical bottomless hole started on February 21, 1997, when a man identified as Mel Waters appeared as a call-in guest on Coast to Coast AM with Art Bell. Waters claimed that he formerly owned rural property nine miles west of Ellensburg in Kittitas County that contained a mysterious hole. According to Bell’s interviews with Waters, the hole had infinite depth and the ability to restore dead animals to life. Waters claimed to have measured the hole’s depth to be more than 15 miles (24 kilometers) by using fishing line and a weight. According to Waters, the hole’s magical properties prompted US “federal agents” to seize the land and fund his relocation to Australia.

“Waters made guest appearances on Bell’s show in 1997, 2000, and 2002. Rebroadcasts of those appearances have helped create what’s been described as a “modern, rural myth”. The exact location of the hole was unspecified, yet several people claimed to have seen it, such as Gerald R. Osborne, who used the ceremonial name Red Elk, who described himself as an “intertribal medicine man…half-breed Native American / white”, and who told reporters in 2012 he visited the hole many times since 1961 and claimed the US government maintained a top secret base there where “alien activity” occurs. But in 2002, Osborne was unable to find the hole on an expedition of 30 people he was leading.

“Local news reporters who investigated the claims found no public records of anyone named Mel Waters ever residing in, or owning property in Kittitas County. According to State Department of Natural Resources geologist Jack Powell, the hole does not exist and is geologically impossible. A hole of the depth claimed “would collapse into itself under the tremendous pressure and heat from the surrounding strata,” said Powell. Powell said an ordinary old mine shaft on private property was probably the inspiration for the stories and commented that Mel’s Hole had established itself as a legend “based on no evidence at all”.

For the first time, I can tell you that Mel’s Hole was actually a complete fabrication created by the members of MDT, with a certain member acting out the part of “Mel” as a guest on the Coast to Coast radio show. In later years, several more “hoaxes” would be fabricated and presented on the show by MDT.

Not one of the listeners or the host of the radio talk-show ever had a clue that many of the stories were completely fabricated by MDT.

The Mass Depopulation Trio virtually disbanded shortly after the Y2K disaster failed to materialize. Their work was done, and so was the sordid credibility of late-night talk radio kookdom.

Debut of the Undercover Research Podcast

Posted in Uncategorized on March 7, 2024 by Michael Theroux

Prestidigitations at an Exposition

Posted in Uncategorized on April 19, 2023 by Michael Theroux

New album coming soon…

Golden Mean Audio – F1

Posted in Uncategorized on March 6, 2023 by Michael Theroux

Coming Soon…

THE SINGULARITY DATE: The Future of Humanity and its Assimilation by Artificial Intelligence 

Posted in Uncategorized on January 6, 2023 by Michael Theroux

THE SINGULARITY DATE:
The Future of Humanity and its Assimilation by Artificial Intelligence 


by Michael Theroux

INTRODUCTION

The Singularity is a hypothetical future event in which technological progress accelerates to the point where humanity experiences exponential change. It is most associated with artificial intelligence (AI) surpassing human intelligence and leading to unforeseen and potentially dramatic changes in society. Some people believe that this could lead to a positive “technological singularity,” in which humanity’s intelligence and capabilities are greatly enhanced by AI, while others are concerned about the potential negative consequences of such a development, such as the loss of jobs and the potential for AI to surpass and potentially threaten humanity.

RAY KURZWEIL

Ray Kurzweil is an American inventor and futurist. He is known for his predictions about artificial intelligence and the future of technology, as well as his work in the fields of optical character recognition and text-to-speech synthesis. Kurzweil has written several books on his predictions for the future, including “The Singularity is Near” and “The Age of Spiritual Machines.” He has also received numerous awards and honors for his work, including the National Medal of Technology and Innovation in 1999 and the Lemelson-MIT Prize in 2002.

Ray Kurzweil has also made a number of contributions to the field of music technology. One of his earliest inventions was the Kurzweil K250, a synthesizer that was released in 1984 and was capable of reproducing the sound of a grand piano with remarkable realism (I have worked on several of these instruments, and they are a delight to repair as they’ve been well engineered).

Kurzweil has made predictions about when the singularity might occur based on exponential increases in technological progress. Kurzweil predicts that the singularity will occur in 2045, based on his analysis of the rate of progress in fields such as artificial intelligence, biotechnology, and nanotechnology. 

There is much debate among experts about when the singularity might occur, if it will happen at all. Some believe it is already happening, and it seems rather obvious that it is – our reliance on search engine information is already inseparable from the biological processes of our brains.

SINGULARITY SKEPTICS

Others are more skeptical about the idea of the singularity and the predictions made about it. Some argue that technological progress is not always exponential and that there are often unforeseen setbacks and barriers to progress. Others have raised concerns about the potential negative consequences of artificial intelligence surpassing human intelligence, such as the potential loss of jobs or the possibility of AI being used for malicious purposes.

There are also many unknowns about what the singularity might actually look like and what its consequences might be. Some people believe that it could lead to the creation of superintelligent AI that is capable of solving complex problems and achieving goals that are beyond the capabilities of humans. Others believe that it could lead to the creation of AI that is capable of surpassing and potentially threatening humanity, leading to a dystopian future in which humans are subservient to machines.

Overall, the concept of the singularity is a highly speculative and controversial one, and it is difficult to make definitive predictions about when or “if” it will occur. Regardless of when it happens, it is clear that advances in technology will continue to shape our society in significant ways, and it will be important for us to carefully consider the potential consequences and ethical implications of these developments. 

POSITIVE IMPLICATIONS

There are many potential positive implications of the singularity, including:

  1. Increased efficiency and productivity: Advanced artificial intelligence and automation could potentially take over many tasks that are currently performed by humans, freeing up time for people to focus on more creative and meaningful work.
  2. Enhanced communication and collaboration: Advanced technology could facilitate more effective communication and collaboration among people, breaking down barriers of language, culture, and distance.
  3. Improved healthcare: Advanced technology could lead to significant advances in healthcare, such as the development of new treatments and therapies that are more effective and less invasive than current options.
  4. Increased quality of life: The singularity could bring about significant improvements in the quality of life for many people, including longer lifespans, reduced poverty, and increased access to education and opportunities.
  5. Solving global challenges: The singularity could also help humanity to tackle some of the most pressing global challenges of our time, such as discerning whether climate change is driven by human activities or by natural planetary processes, addressing food and water insecurity, and responding to viral epidemics.

NEGATIVE IMPLICATIONS

While the singularity has the potential to bring about many positive changes, it also carries with it the risk of negative consequences. Some potential negative consequences of the singularity include:

  1. Unemployment: The increasing automation of tasks could potentially lead to widespread unemployment, as machines take over jobs that are currently performed by humans.
  2. Inequality: The benefits of the singularity may not be evenly distributed, leading to increased inequality between those who understand and have access to advanced technologies and those who do not (program or be programmed!).
  3. Security risks: The development of advanced artificial intelligence could potentially pose security risks, as it could be used to hack into computer systems, gather sensitive information, or even engage in acts of cyber warfare (already going on).
  4. Loss of privacy: The proliferation of advanced technologies could also lead to the erosion of privacy, as it becomes easier for governments and corporations to track and monitor individuals (again, already going on).
  5. Ethical concerns: The development of advanced artificial intelligence raises a number of ethical concerns, such as the potential for the mistreatment of intelligent machines and the ethical implications of creating a being that may surpass human intelligence.

WILL THE SINGULARITY ACTUALLY HAPPEN?

I doubt we will ever see the Singularity occur – in anyone’s lifetime. At this stage in the progress of AI, there is no stopping it, and every major country on the planet is scrambling to bring the Singularity to fruition. Unfortunately, there will be some who deem it necessary to destroy it before it happens. That means the destruction of humanity. For this incarnation of humanity to continue, we need AI. We need to become the proverbial “Borg” – to allow the assimilation. We are already doing it, whether or not we are actually conscious of it. It is simply not logical that we’ll ever be able to sustain the current evolution of the species without an erudite intervention such as AI. But it is doubtful that this solution will ever be embraced.

REFERENCES

“Program or Be Programmed: Ten Commands for a Digital Age” by Douglas Rushkoff
https://www.amazon.com/Program-Be-Programmed-Commands-Digital/dp/159376426X

“The Singularity Is Near: When Humans Transcend Biology” by Ray Kurzweil
https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology/dp/0670033847/

“The Age of Spiritual Machines: When Computers Exceed Human Intelligence” by Ray Kurzweil
https://www.amazon.com/Age-Spiritual-Machines-Computers-Intelligence/dp/0140282025/

“The Computer and the Incarnation of Ahriman” by David Black
https://rudolfsteinerbookstore.com/product/the-computer-and-the-incarnation-of-ahriman/

The Singularity Date
https://singularitydate.godaddysites.com/

OpenAI: A research organization that aims to promote and develop friendly artificial intelligence in the hope of benefiting humanity as a whole.
https://openai.com/