Prestidigitations at an Exposition

Posted in Uncategorized on April 19, 2023 by Michael Theroux

New album coming soon…


Golden Mean Audio – F1

Posted in Uncategorized on March 6, 2023 by Michael Theroux

Coming Soon…

The Russian Hacking Diatribe, and Why It Is Complete Agitprop Nonsense

Posted in Uncategorized on January 10, 2023 by Michael Theroux

Originally published in 2600 Magazine, October 2017 – and relevant today in light of new evidence.

The Russian Hacking Diatribe, and Why It Is Complete Agitprop Nonsense

By Michael Theroux

There is a necessity for the large corporate interests controlling the government to create agitation once again with Russia and other enemy states in order to gain the support of the people to funnel massive funds to the Military-Industrial Complex. It’s a plausible tactic in a country where politicians are sponsored by giant defense corporations. If they’re pulling out of active wars but desperately need to keep fueling the military-industrial complex that signs their paychecks, they could cleverly revive the Cold War game plan. And they have.

Recent and past “news” delivered by the MSM – which has wholly embraced the intelligentsia’s claims offered up by the CIA, and now other three-letter agencies – that a Russian state-sponsored hack of the DNC and the RNC swayed the US election results is patently absurd, and pure agitprop. To date, there is absolutely no conclusive evidence that anything of the sort occurred. The straw man tactic has been employed again, and it appears to be working as usual.

The only reason to continually create new bad guys, or conjure up the old bad guys is to fill the coffers of corporate Department of Defense contractors who lobby the shit out of our government. THEY DON’T WORK FOR US. Our so-called government officials work for the money they get from corporate interests. And, they need those paychecks to keep coming in.

Now, I could go into the sexy details of what it takes to track down a real state hacker (most of what the official rhetoric has to offer is juvenile and pedantic), but it’s pointless when you realize this has nothing to do with hacking. There is a bigger picture here, people, and it’s emblazoned with a scarlet letter sewn into the very fabric of our willful unconsciousness. We need to wake up and not accept this bullshit any longer.

Breakdown of the “So-called” Evidence for Russian Hacking, and the Sad State of Cybersecurity

Was there definitive evidence contained in the JAR (Joint Analysis Report – GRIZZLY STEPPE – Russian Malicious Cyber Activity) or FireEye’s analysis, “APT28: A WINDOW INTO RUSSIA’S CYBER ESPIONAGE OPERATIONS,” that Russian state-sponsored hackers compromised the DNC server with malware and then leaked the acquired documents to WikiLeaks? Absolutely not. And here’s why:

Let’s first run through the “so-called” evidence – basically two “smoking guns” in the analysis – and a few other questions pertinent to the investigation. I’ll address each point with some technical details and maybe a little common-sense evaluation:

  1. Certain malware settings suggest that the authors did the majority of their work in a Russian language build environment.
  2. The malware compile times corresponded to normal business hours in the UTC+4 time zone, which includes major Russian cities such as Moscow and St. Petersburg.
  3. Ultimately, WikiLeaks was the source of the dissemination of the compromised data – where did they acquire it?
  4. According to media sources, all 17 US intelligence agencies confirmed that Russian state-sponsored hackers were the source of the attacks.
  5. Was this “so-called” hack designed to affect the outcome of the US election?

Let us now address each of these points specifically (some of this may be more technical for the average human – Program or be Programmed):

1. Certain malware settings suggest that the authors did the majority of their work in a Russian language build environment.

APT28 (Advanced Persistent Threat 28) consistently compiled Russian language settings into their malware.

Locale ID           Primary language (country)    Samples
0x0419              Russian (ru)                  59
0x0409              English (us)                  27
0x0000 or 0x0800    Neutral locale                16
0x0809              English (uk)                  1

By no means is this evidence of anything. It could even be a US-sponsored hack, for that matter, obfuscating its origin by using a Russian build environment. This is pure speculation, and any security researcher knows this has effectively been used by malware authors in the past.
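For reference, these locale IDs are standard Windows language identifiers, and splitting one into its primary-language and sublanguage (country/region) fields takes two bit operations – which is also exactly how easily one can be chosen or forged. A minimal Python sketch (the layout follows the standard Windows convention; the function name is mine):

```python
# Decode a 16-bit Windows language ID into its primary-language and
# sublanguage (country/region) fields: the low 10 bits name the language,
# the high 6 bits name the regional variant.
def decode_lang_id(lang_id: int) -> tuple[int, int]:
    primary = lang_id & 0x3FF  # e.g. 0x19 = LANG_RUSSIAN, 0x09 = LANG_ENGLISH
    sub = lang_id >> 10        # e.g. 0x01 = default region
    return primary, sub

ru = decode_lang_id(0x0419)  # Russian locale from the table above
uk = decode_lang_id(0x0809)  # same primary language as 0x0409, different region
```

Decoding 0x0419 yields primary language 0x19 (Russian), while 0x0409 and 0x0809 share primary language 0x09 (English) and differ only in the region field.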

2. The malware compile times corresponded to normal business hours in the UTC+4 time zone, which includes major Russian cities such as Moscow and St. Petersburg.

The FireEye report states:

“During our research into APT28’s malware, we noted two details consistent across malware samples. The first was that APT28 had consistently compiled Russian language settings into their malware. The second was that malware compile times from 2007 to 2014 corresponded to normal business hours in the UTC + 4 time zone, which includes major Russian cities such as Moscow and St. Petersburg. Use of Russian and English Language Settings in PE Resources include language information that can be helpful if a developer wants to show user interface items in a specific language. Non-default language settings packaged with PE resources are dependent on the developer’s build environment. Each PE resource includes a “locale” identifier with a language ID composed of a primary language identifier indicating the language and a sublanguage identifier indicating the country/region.”

Any malware author could intentionally leave behind false clues in the resources section, pointing to Russia or any other country. These signatures are very easy to manipulate, and anyone with a modicum of Googling skills can alter the language identifier of the resources in PE files. ANY state-sponsored entity could easily obfuscate the language identifier in this way. One could also use an online compiler or an online integrated development environment (IDE) through a proxy service to alter times, making compile times appear to originate from any chosen region. The information in the FireEye report is spurious at best.
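To underline how cheap this forgery is, here is a minimal Python sketch that rewrites the COFF TimeDateStamp, the very field compile-time analysis relies on. The stub bytes below are a hypothetical minimal PE header built just for the demonstration, not a real executable:

```python
import struct

def set_pe_timestamp(data: bytes, new_ts: int) -> bytes:
    """Overwrite the COFF TimeDateStamp field of a PE image."""
    # e_lfanew at offset 0x3C of the DOS header points at the "PE\0\0" signature
    pe_off = struct.unpack_from("<I", data, 0x3C)[0]
    if data[pe_off:pe_off + 4] != b"PE\0\0":
        raise ValueError("not a PE image")
    buf = bytearray(data)
    # TimeDateStamp sits 8 bytes past the signature (after Machine
    # and NumberOfSections in the COFF file header)
    struct.pack_into("<I", buf, pe_off + 8, new_ts)
    return bytes(buf)

# Demonstrate on a minimal synthetic header (not a runnable executable)
stub = bytearray(0x40 + 24)
struct.pack_into("<I", stub, 0x3C, 0x40)  # e_lfanew -> 0x40
stub[0x40:0x44] = b"PE\0\0"
patched = set_pe_timestamp(bytes(stub), 0x2A2A2A2A)
```

A real attacker would run the same patch against a compiled binary, or simply set the system clock or build environment before compiling, so the timestamp evidence costs nothing to plant.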

3. Ultimately, WikiLeaks was the source of the dissemination of the compromised data – where did they acquire it? 

Julian Assange, the founder of WikiLeaks, has repeatedly stated that the source of the information they posted was NOT from ANY state-sponsored source – including Russia. In fact, in all of the reports (including the JAR and FireEye) they never once mention WikiLeaks. Strange.

4. According to media sources, all 17 US intelligence agencies confirmed Russian state-sponsored hackers were the source of the attacks.

This is hilarious – many of these 17 agencies wouldn’t know a hack from a leak, nor would they have been privy to any real data beyond what a couple of other agencies reported – reporting that was thin, barely circumstantial, and wholly derived from a third-party security analysis:

  1. Air Force Intelligence
  2. Army Intelligence
  3. Central Intelligence Agency
  4. Coast Guard Intelligence
  5. Defense Intelligence Agency
  6. Department of Energy
  7. Department of Homeland Security
  8. Department of State
  9. Department of the Treasury
  10. Drug Enforcement Administration
  11. Federal Bureau of Investigation
  12. Marine Corps Intelligence
  13. National Geospatial-Intelligence Agency
  14. National Reconnaissance Office
  15. National Security Agency
  16. Navy Intelligence
  17. Office of the Director of National Intelligence

5. Was this “so-called” hack designed to affect the outcome of the US election?

It is clear, even if there were state-sponsored hacks, that the information provided in WikiLeaks had no relation to Russian manipulation of US elections. The information speaks for itself – it is the content of the leaks that is relevant – and it matters not where it came from. DNC corruption is the real issue, and any propaganda agenda designed to direct attention away from the damage the info presents is wholly deflection.

Most of the references used in the JAR report are really from third-party cybersecurity firms looking to “show off” their prowess at rooting out a hacker culprit. This ultimately means money for them. This is the reality of the sad state of security today. Note that not one report mentions that every single one of the compromises was directed at Microsoft operating systems. Why, when everyone knows that Microsoft Windows is the most insecure OS and is specifically targeted by malware authors, state-sponsored or otherwise, do any governments still use it? Fortunately, there are real security researchers out there who see through the smoke and mirrors and aren’t buying the BS handed to them by government entities and the media outlets they control.

The Anti-Forensic Marble Framework

With the release of the “Marble Framework” on WikiLeaks, we come upon more evidence that the entire so-called “Russian Hacking” story could very well have been a US state-sponsored hack – and it’s more likely.

From WikiLeaks:

“Marble is used to hamper forensic investigators and anti-virus companies from attributing viruses, trojans and hacking attacks to the CIA. Marble does this by hiding (“obfuscating”) text fragments used in CIA malware from visual inspection. This is the digital equivalent of a specialized CIA tool to place covers over the English language text on U.S. produced weapons systems before giving them to insurgents secretly backed by the CIA.”

CIA Leaks

I’ve been through many of the docs included in Vault 7 and it isn’t anything at all new or revelatory. I called this back in 2005 and detailed much of it back then. Most thought me a kook. Much of what I’ve looked at so far is valid, although it’s very basic info any teenage hacker attending DEFCON would know about.

It’s old crap, and I’d put money on it that the CIA itself “leaked” the data.

And finally, the most recent stories of Russian attempts to hack into U.S. voting systems are even more ridiculous in their claims, and were based exclusively on info from the Department of Homeland Security. According to the MSM, 21 states were targeted by “Russian” hackers in last year’s presidential election. These claims get ineptly hyped by media outlets, are almost always based on nothing more than fact-free assertions from government officials, and look completely absurd under even minimal scrutiny by real security experts, because they are entirely lacking in any real evidence.

“In our age there is no such thing as ‘keeping out of politics.’ All issues are political issues, and politics itself is a mass of lies, evasions, folly, hatred and schizophrenia.” 

~ George Orwell

For complete information, please check out the links cited as references below:

THE SINGULARITY DATE: The Future of Humanity and its Assimilation by Artificial Intelligence 

Posted in Uncategorized on January 6, 2023 by Michael Theroux

The Future of Humanity and its Assimilation by Artificial Intelligence 

by Michael Theroux


The Singularity is a hypothetical future event in which technological progress accelerates to the point where humanity experiences exponential change. It is most associated with artificial intelligence (AI) surpassing human intelligence and leading to unforeseen and potentially dramatic changes in society. Some people believe that this could lead to a positive “technological singularity,” in which humanity’s intelligence and capabilities are greatly enhanced by AI, while others are concerned about the potential negative consequences of such a development, such as the loss of jobs and the potential for AI to surpass and potentially threaten humanity.


Ray Kurzweil is an American inventor and futurist. He is known for his predictions about artificial intelligence and the future of technology, as well as his work in the fields of optical character recognition and text-to-speech synthesis. Kurzweil has written several books on his predictions for the future, including “The Singularity is Near” and “The Age of Spiritual Machines.” He has also received numerous awards and honors for his work, including the National Medal of Technology and Innovation in 1999 and the Lemelson-MIT Prize in 2001.

Ray Kurzweil has also made a number of contributions to the field of music technology. One of his earliest inventions was the Kurzweil K250, a synthesizer that was released in 1984 and was capable of reproducing the sound of a grand piano with remarkable realism (I have worked on several of these instruments, and they are a delight to repair as they’ve been well engineered).

Kurzweil bases his predictions about when the singularity might occur on exponential increases in technological progress; from his analysis of the rate of progress in fields such as artificial intelligence, biotechnology, and nanotechnology, he predicts that the singularity will occur in 2045.

There is much debate among experts about when the singularity might occur, or whether it will happen at all. Some believe it is already happening – and it seems rather obvious that it is: our reliance on search engine information is already inseparable from the biological processes of our brains.


Others are more skeptical about the idea of the singularity and the predictions made about it. Some argue that technological progress is not always exponential and that there are often unforeseen setbacks and barriers to progress. Others have raised concerns about the potential negative consequences of artificial intelligence surpassing human intelligence, such as the potential loss of jobs or the possibility of AI being used for malicious purposes.

There are also many unknowns about what the singularity might actually look like and what its consequences might be. Some people believe that it could lead to the creation of superintelligent AI that is capable of solving complex problems and achieving goals that are beyond the capabilities of humans. Others believe that it could lead to the creation of AI that is capable of surpassing and potentially threatening humanity, leading to a dystopian future in which humans are subservient to machines.

Overall, the concept of the singularity is a highly speculative and controversial one, and it is difficult to make definitive predictions about when or “if” it will occur. Regardless of when it happens, it is clear that advances in technology will continue to shape our society in significant ways, and it will be important for us to carefully consider the potential consequences and ethical implications of these developments. 


There are many potential positive implications of the singularity, including:

  1. Increased efficiency and productivity: Advanced artificial intelligence and automation could potentially take over many tasks that are currently performed by humans, freeing up time for people to focus on more creative and meaningful work.
  2. Enhanced communication and collaboration: Advanced technology could facilitate more effective communication and collaboration among people, breaking down barriers of language, culture, and distance.
  3. Improved healthcare: Advanced technology could lead to significant advances in healthcare, such as the development of new treatments and therapies that are more effective and less invasive than current options.
  4. Increased quality of life: The singularity could bring about significant improvements in the quality of life for many people, including longer lifespans, reduced poverty, and increased access to education and opportunities.
  5. Solving global challenges: The singularity could also help humanity to tackle some of the most pressing global challenges of our time, such as how to discern whether or not climate change is happening due to human activities or natural processes of the planet, food and water insecurities, and viral epidemics.


While the singularity has the potential to bring about many positive changes, it also carries with it the risk of negative consequences. Some potential negative consequences of the singularity include:

  1. Unemployment: The increasing automation of tasks could potentially lead to widespread unemployment, as machines take over jobs that are currently performed by humans.
  2. Inequality: The benefits of the singularity may not be evenly distributed, leading to increased inequality between those who understand and have access to advanced technologies and those who do not (program or be programmed!).
  3. Security risks: The development of advanced artificial intelligence could potentially pose security risks, as it could be used to hack into computer systems, gather sensitive information, or even engage in acts of cyber warfare (already going on).
  4. Loss of privacy: The proliferation of advanced technologies could also lead to the erosion of privacy, as it becomes easier for governments and corporations to track and monitor individuals (again, already going on).
  5. Ethical concerns: The development of advanced artificial intelligence raises a number of ethical concerns, such as the potential for the mistreatment of intelligent machines and the ethical implications of creating a being that may surpass human intelligence.


I doubt we will ever see the Singularity occur in anyone’s lifetime. At this stage in the progress of AI, there is no stopping it, and every major country on the planet is scrambling to bring the Singularity to fruition. Unfortunately, there will be some who will deem it necessary to destroy it before it happens. That would mean the destruction of humanity. For this incarnation of humanity to continue, we need AI. We need to become the proverbial “Borg” – to allow the assimilation. We are already doing it, whether or not we are actually conscious of it. It is simply not logical that we’ll ever be able to sustain the current evolution of the species without an erudite intervention such as AI. But it is doubtful that this solution will ever be embraced.


“Program or Be Programmed: Ten Commands for a Digital Age” by Douglas Rushkoff

“The Singularity Is Near: When Humans Transcend Biology” by Ray Kurzweil

“The Age of Spiritual Machines: When Computers Exceed Human Intelligence” by Ray Kurzweil

“The Computer and the Incarnation of Ahriman” by David Black

The Singularity Date

OpenAI: A research organization that aims to promote and develop friendly artificial intelligence in the hope of benefiting humanity as a whole.

Simulated Life – A Concise History of Thought on the Possibility that We are Living in a Simulation

Posted in Uncategorized on January 3, 2023 by Michael Theroux

Simulated Life – A Concise History of Thought on the Possibility that We are Living in a Simulation

by Michael Theroux


The concept of simulation theory proposes that our reality is not base reality, but rather a computer-generated simulation created by a highly advanced civilization. This theory has gained significant attention in recent years, with proponents arguing that it explains various mysteries and anomalies in our world. However, the theory remains controversial and is met with skepticism by many in the scientific community.

In this research paper, we will explore the history and origins of simulation theory, examine the evidence that supports the theory, and consider the implications of the theory for our understanding of reality. We will also examine the criticisms of simulation theory and consider whether it is a viable explanation for the nature of our reality.

History and Origins of Simulation Theory:

The idea that we may be living in a simulated reality can be traced back to ancient philosophers like Plato, who proposed the concept of the “Allegory of the Cave.” 

In 2003, philosopher Nick Bostrom proposed the “Simulation Argument,” which suggests that it is highly likely that we are living in a simulated reality. Bostrom’s argument is based on the idea that if advanced civilizations reach a point where they are able to create highly realistic simulations of their ancestors, it is likely that they would do so. Therefore, the probability of us living in a simulated reality is high if we assume that there are advanced civilizations in the universe.

Evidence for Simulation Theory:

There are several pieces of evidence that have been used to support the idea of simulation theory. One of the most commonly cited pieces of evidence is the concept of the “Mandela Effect.” (See examples)

Another piece of evidence cited by proponents of simulation theory is the concept of quantum mechanics and the idea that reality is not fixed and can be influenced by our observations and actions. This suggests that the reality we experience may not be the true reality, but rather a constructed one that is influenced by our perceptions.

Here are 10 examples in history regarding the idea that we are living in a simulation:

  1. Plato’s “Allegory of the Cave” – In this allegory, Plato suggests that the world we see and experience is just a shadow of the true reality.
  2. Descartes’ “Cogito, ergo sum” – In his philosophical work, Descartes proposed the idea that the only thing we can be certain of is our own consciousness, leading to the possibility that everything else is an illusion.
  3. The “Brain in a Vat” thought experiment – This thought experiment, articulated by philosopher Hilary Putnam, suggests that we could be brains in a vat being fed illusions of a reality.
  4. Nick Bostrom’s “Simulation Argument” – Bostrom’s argument suggests that it is highly likely that we are living in a simulated reality if we assume that there are advanced civilizations in the universe that can create realistic simulations of their ancestors.
  5. The “Mandela Effect” – This phenomenon, in which large groups of people remember events or details differently than they actually occurred, could be explained by the idea that our memories are being altered by the simulation.
  6. The concept of quantum mechanics – The idea that reality is not fixed and can be influenced by our observations and actions suggests that the reality we experience may not be the true reality, but rather a constructed one influenced by our perceptions.
  7. The “Matrix” movies – These movies explore the concept of a simulated reality in which humans are unknowingly living in a computer-generated world.
  8. The “Westworld” TV show – This show centers on a theme park where visitors can interact with robots in a simulated Wild West setting, leading to the question of whether the robots’ experiences and emotions are real or just programmed responses.
  9. The “Ready Player One” novel and movie – This story explores the concept of a virtual reality world in which people can escape their mundane lives and live out their wildest dreams.
  10. The concept of virtual reality – The development of virtual reality technology has led to the question of whether it is possible to create a simulated reality that is indistinguishable from the real world.

Implications of Simulation Theory:

If we are indeed living in a simulated reality, what does this mean for our understanding of the world and our place in it? One of the most significant implications of simulation theory is that it challenges our understanding of free will and determinism. If we are just characters in a program, are our actions and choices predetermined or do we have the ability to make our own choices and determine our own path?

Simulation theory also raises questions about the nature of consciousness and whether it is something that can be simulated. If we are just characters in a program, does that mean that our experiences and emotions are not real?

Criticisms of Simulation Theory:

Simulation theory is met with skepticism by many in the scientific community, who argue that there is not sufficient evidence to support the idea that we are living in a simulated reality. Some critics argue that the theory relies on assumptions about the capabilities of advanced civilizations and is not based on empirical evidence.

Additionally, simulation theory does not offer a satisfactory explanation for how a simulated reality could be created or how it could be sustained. It is unclear how a simulation of such complexity could be created and maintained, and there is no evidence to suggest that it is possible.


  1. ChatGPT – Yes, it already knows. ;-)
  2. My research – Yes, I wrote much of this article.

Superluminal Biological Communications

Posted in Uncategorized on December 31, 2022 by Michael Theroux

Superluminal Biological Communications

By Michael Theroux

Many years ago, I wrote a book called “Biological Communications” (see references below). The focus of the book centered on the work of L. George Lawrence in the 1960s, and his research on the potential superluminal communications of biological organisms. With a little help from my AI friends and ChatGPT, we can now sort out a bit of this research.

Biological communication refers to the process by which living organisms transmit information to one another through various means, such as chemical signals, sound, or visual signals. These forms of communication can be essential for survival and reproduction, as they allow organisms to coordinate their behaviors and exchange information about their environment.

One interesting area of study within the field of biological communication is the use of superluminal (or faster-than-light) transmission by some organisms. Superluminal transmission refers to the ability to transmit information at speeds that exceed the speed of light, which is considered the maximum speed at which information can travel according to the laws of physics.

There are several examples of superluminal transmission in the natural world, although the mechanisms by which these phenomena occur are not yet fully understood. One well-known example is the process of quorum sensing, which is used by some bacteria to communicate and coordinate their behaviors. Quorum sensing involves the release of chemical signals called autoinducers, which can be detected by other bacteria and trigger a response. Some studies have suggested that quorum sensing may occur at speeds that are faster than the speed of light, although these claims are controversial and have not been widely accepted.

Other examples of superluminal transmission in nature include the ability of some animals to communicate using ultrasound, which is sound waves at frequencies higher than the range of human hearing. Some bats, for example, use ultrasound to navigate and locate prey, and some whales and dolphins use it for communication and echolocation. The mechanisms by which these animals are able to produce and detect ultrasound are not fully understood, and it is possible that they may involve some form of superluminal transmission.

There is also some evidence that plants may be able to communicate using methods that involve superluminal transmission. For example, some studies have suggested that plants may be able to sense the presence of other plants and respond to their needs through the release of chemical signals. The mechanisms by which these signals are transmitted and detected are not well understood, and further research is needed to confirm the existence and nature of these phenomena.

In conclusion, superluminal transmission is a fascinating and poorly understood aspect of biological communication that has the potential to shed light on the ways in which living organisms interact and communicate with one another. Further research is needed to better understand the mechanisms by which superluminal transmission occurs and the ways in which it is used by different organisms.


Biological Communications – Selected Articles, Experiments, and Patent Designs.

Using AI to Compare Historical SOHO Satellite Data with Historical Terrestrial Earthquake Data for Prediction Analysis

Posted in Uncategorized on December 27, 2022 by Michael Theroux

Using AI to Compare Historical SOHO Satellite Data with Historical Terrestrial Earthquake Data for Prediction Analysis

By Michael Theroux


Predicting earthquakes is a complex task that requires a thorough understanding of the underlying geophysical processes that lead to earthquakes and the development of sophisticated mathematical models that can accurately represent these processes. While it is possible that certain types of solar activity, such as solar flares or coronal mass ejections, could have some influence on the Earth’s geomagnetic field and potentially contribute to the occurrence of earthquakes, it should be noted that solar data alone may not be sufficient to accurately predict earthquakes.

In order to predict earthquakes, we would typically rely on a combination of data from various sources, including geological, geophysical, and geodetic data, as well as data from seismographic and geodetic networks. These data are used to develop models that can identify patterns and trends in the occurrence of earthquakes and use these patterns to make predictions about future earthquakes.

One approach that has been used to predict earthquakes is the application of machine learning techniques, such as neural networks, to large datasets of seismographic and geodetic data. These techniques can help to identify patterns and correlations in the data that may be indicative of impending earthquakes, and can be used to make predictions about the likelihood and magnitude of future earthquakes.
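As a purely illustrative sketch of the first-pass screening such an analysis might start with, the Python below aligns a daily solar-activity index against a daily earthquake count and computes lagged correlations. The arrays here are synthetic random data and the names are hypothetical; real work would substitute actual SOHO data products and counts binned from an earthquake catalog such as the USGS catalog.

```python
import numpy as np

rng = np.random.default_rng(42)
days = 365
solar_index = rng.normal(120.0, 15.0, days)  # hypothetical daily solar-activity index
quake_count = rng.poisson(2.0, days)         # hypothetical daily count of M4+ events

def lagged_correlations(x, y, max_lag=7):
    """Pearson correlation of y against x shifted forward by 0..max_lag days."""
    corrs = {}
    for lag in range(max_lag + 1):
        # Compare each day's solar value with the quake count `lag` days later
        corrs[lag] = float(np.corrcoef(x[:len(x) - lag or None], y[lag:])[0, 1])
    return corrs

corrs = lagged_correlations(solar_index, quake_count)
```

With real data, any correlation found this way would only be a candidate feature to feed into a proper model (and to test against chance), not evidence of predictive power on its own.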

Overall, while it is certainly possible to use SOHO solar data in conjunction with other types of data to try to predict earthquakes, this approach is likely insufficient on its own. Instead, a more comprehensive and multidisciplinary approach, incorporating data from a wide range of sources and utilizing advanced modeling techniques, is likely to be more effective in accurately predicting earthquakes.


The SOHO satellite (short for “Solar and Heliospheric Observatory”) is a spacecraft that was launched by NASA and the European Space Agency (ESA) in 1995 to study the sun and its effects on the solar system. The SOHO satellite collects a wide variety of data on the sun and solar system, including images, spectra, and other types of data that are used to understand the sun’s behavior and the impacts it has on the earth.

Some of the specific types of data that the SOHO satellite collects include:

Images of the sun: The SOHO satellite has several instruments that are used to capture images of the sun, including the Extreme ultraviolet Imaging Telescope (EIT) and the Large Angle and Spectrometric Coronagraph (LASCO). These instruments capture images of the sun’s surface and atmosphere, allowing us to study the sun’s features, such as sunspots and solar flares. The EIT instrument captures images of the sun in four different wavelengths of extreme ultraviolet light, while the LASCO instrument captures images of the sun’s corona (the outer atmosphere of the sun). These images are used to study the sun’s magnetic field, solar winds, and other features that are important for understanding its behavior.

Spectra of the sun: The SOHO satellite has several instruments that are used to analyze the sun’s light and measure its composition, including the Solar Ultraviolet Measurements of Emitted Radiation (SUMER) and the Coronal Diagnostic Spectrometer (CDS). These instruments capture spectra of the sun’s light, allowing us to study the sun’s chemical composition and understand how it produces and releases energy. The SUMER instrument captures spectra of the sun’s ultraviolet light, while the CDS instrument captures spectra of the sun’s visible and ultraviolet light. These spectra are used to study the sun’s temperature, density, and other characteristics that are important for understanding its behavior.

Data on the solar wind and solar particles: The SOHO satellite has several instruments that measure the flow of the solar wind and solar particles from the sun, including the Solar Wind Anisotropies (SWAN) instrument and the Charge, Element, and Isotope Analysis System’s Mass Time-Of-Flight sensor (CELIAS/MTOF). These instruments measure the velocity, density, and composition of the solar wind and solar particles, allowing us to study how the sun affects the rest of the solar system. The SWAN instrument measures the solar wind by detecting the hydrogen atoms present in it, while the CELIAS/MTOF instrument measures solar particles by analyzing their charge, element, and isotope composition.

Overall, the SOHO satellite collects a wide variety of data on the sun and solar system, providing a wealth of information used to understand the sun’s behavior and its impacts on the earth, from the structure of its magnetic field to the way the solar wind shapes the rest of the solar system.


TensorFlow is an open-source software library for machine learning and artificial intelligence. It was developed by Google and is widely used in industry, academia, and research to build and deploy machine learning models.

At its core, TensorFlow is a library for performing numerical computation using data flow graphs. A data flow graph is a directed graph in which nodes represent mathematical operations and edges represent the flow of data between those operations. TensorFlow uses this data flow graph to perform machine learning tasks, such as training and evaluating machine learning models.
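As a minimal sketch of the data flow idea (the values here are purely illustrative), each TensorFlow operation is a node in the graph and the tensors passed between them are the edges; in TensorFlow 2 the graph executes eagerly, so the result is available immediately:

```python
import tensorflow as tf

# Two constant input nodes
a = tf.constant(2.0)
b = tf.constant(3.0)

# Each operation below is a node in the data flow graph; the tensors
# flowing between them are the edges.
product = tf.multiply(a, b)   # 6.0
result = tf.add(product, a)   # 6.0 + 2.0
print(float(result))          # 8.0
```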

TensorFlow allows users to define and execute data flow graphs using a high-level API, making it easy to build and deploy machine learning models. It also includes a number of tools and libraries for tasks such as data preprocessing, visualization, and optimization, making it a comprehensive toolkit for machine learning and artificial intelligence.

One of the main strengths of TensorFlow is its ability to run on a variety of platforms, including CPUs, GPUs, and TPUs (tensor processing units). This allows users to easily scale their machine-learning models to handle large amounts of data and perform complex tasks.

TensorFlow is widely used in a variety of applications, including image and speech recognition, natural language processing, and machine translation. It is also used in many research projects, making it a popular choice for machine learning and artificial intelligence research.

Using TensorFlow to compare historical SOHO satellite data with historical earthquake data can be a powerful way to uncover insights and patterns that may not be immediately obvious. The following will explore the various ways in which TensorFlow can be used to compare these two types of data and identify any potential relationships between them.

To begin, we need to gather and prepare the data for analysis. This will involve obtaining the SOHO satellite data and earthquake data, and then formatting it in a way that is suitable for use with TensorFlow.

There are several sources of SOHO satellite data that can be used for this purpose, including NASA’s SOHO website and the European Space Agency’s SOHO data archive. These sources provide a wealth of data on the sun and solar system, including images, spectra, and other types of data that can be used to understand the sun’s behavior and the impacts it has on the earth.

Similarly, there are several sources of earthquake data that can be used for this analysis, including the US Geological Survey’s earthquake database and the Global Earthquake Model’s OpenQuake engine. These sources provide data on earthquakes around the world, including information on the location, magnitude, and other characteristics of each earthquake.

Once we have obtained the SOHO satellite data and earthquake data, we need to format it in a way that is suitable for use with TensorFlow. This may involve cleaning and preprocessing the data, as well as selecting specific subsets of the data to use for the analysis. It is also necessary to extract features from the data that can be used to identify patterns and correlations between the two datasets.
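The cleaning, feature-extraction, and alignment steps above can be sketched with Pandas. The column names (‘timestamp’, ‘solar_activity’, ‘magnitude’) and the toy records are assumptions for illustration; real SOHO and earthquake exports will differ:

```python
import numpy as np
import pandas as pd

# Hypothetical raw records standing in for real SOHO and earthquake exports
solar = pd.DataFrame({
    'timestamp': pd.to_datetime(['2024-01-01', '2024-01-02', '2024-01-03']),
    'solar_activity': [1.2, np.nan, 3.4],
})
quakes = pd.DataFrame({
    'timestamp': pd.to_datetime(['2024-01-02', '2024-01-03']),
    'magnitude': [5.1, 4.7],
})

# Cleaning: drop rows with missing readings
solar = solar.dropna(subset=['solar_activity'])

# Feature extraction: e.g. day-over-day change in solar activity
solar['activity_delta'] = solar['solar_activity'].diff().fillna(0.0)

# Alignment: keep only timestamps present in both datasets
merged = solar.merge(quakes, on='timestamp', how='inner')
print(merged.shape)  # (1, 4)
```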

Now that we have the data prepared and ready for analysis, we use TensorFlow to build a model that compares the two datasets and identifies any correlations or patterns between them. This involves using a variety of techniques, such as deep learning, machine learning, or statistical analysis, depending on the specific goals of the analysis and the characteristics of the data.

We can use TensorFlow’s deep learning capabilities to build a neural network that takes the SOHO satellite data and earthquake data as input and outputs a prediction of the likelihood of an earthquake occurring. By training the model on a large dataset of historical data, we can tune it to predict earthquakes from the SOHO satellite data.
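A minimal Keras sketch of such a network, trained here on synthetic stand-in data (three assumed solar features per sample and a binary earthquake label; feature names, shapes, and architecture are illustrative, not a validated design):

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in data: 3 solar-derived features per sample, and a
# binary label for whether a significant earthquake followed.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)).astype('float32')
y = (rng.random(200) > 0.5).astype('float32')

model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),  # earthquake probability
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

probs = model.predict(X[:5], verbose=0)
print(probs.shape)  # (5, 1)
```

With real data, the random arrays would be replaced by the merged, feature-extracted SOHO and earthquake records, and far more samples and epochs would be needed.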

We can also use TensorFlow’s machine learning algorithms to identify patterns in the data and identify any potential correlations between the SOHO satellite data and earthquake data. This involves using techniques such as clustering, classification, or regression to analyze the data and identify any trends or patterns.
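As a first screen before any heavier modeling, a simple correlation and least-squares regression can test for a linear relationship between the two series. The numbers below are purely synthetic placeholders:

```python
import numpy as np

# Toy aligned series: does solar activity linearly track magnitude?
solar_activity = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
magnitude = np.array([4.1, 4.3, 4.2, 4.8, 5.0])

# Pearson correlation coefficient between the two series
r = np.corrcoef(solar_activity, magnitude)[0, 1]

# Least-squares fit: magnitude ≈ slope * solar_activity + intercept
slope, intercept = np.polyfit(solar_activity, magnitude, 1)
print(round(r, 2), round(slope, 2))  # 0.92 0.23
```

A strong correlation in such a screen would justify the deeper clustering or neural-network analysis, though correlation alone never establishes a causal link.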

The key to using TensorFlow to compare historical SOHO satellite data with historical earthquake data is to carefully select and prepare the data, and then use the appropriate techniques to analyze and interpret the results. With the right tools and techniques, it is possible to uncover valuable insights and patterns that can help us better understand the relationships between these two types of data.

The following code first loads the SOHO solar data and the earthquake data using the tf.keras.utils.get_file function, which downloads each file from its URL and saves it locally (the actual URLs were omitted from the original article). It then uses the pd.merge function from the Pandas library to merge the two datasets on the ‘timestamp’ column, and finally plots ‘solar_activity’ against ‘magnitude’ using Matplotlib’s plt.scatter function.

import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf

# Load the SOHO solar data. The original article omitted the actual URL;
# the address below is a placeholder for a real CSV export.
SOHO_solar_data = tf.keras.utils.get_file(
    'SOHO_solar_data.csv', 'https://example.com/SOHO_solar_data.csv')
SOHO_solar_data = pd.read_csv(SOHO_solar_data)

# Load the earthquake data (URL likewise omitted in the original)
earthquake_data = tf.keras.utils.get_file(
    'earthquake_data.csv', 'https://example.com/earthquake_data.csv')
earthquake_data = pd.read_csv(earthquake_data)

# Merge the two datasets on the 'timestamp' column
merged_data = pd.merge(SOHO_solar_data, earthquake_data, on='timestamp')

# Compare the two datasets: solar activity against earthquake magnitude
plt.scatter(merged_data['solar_activity'], merged_data['magnitude'])
plt.xlabel('solar_activity (SOHO solar data)')
plt.ylabel('magnitude (earthquake data)')
plt.show()

If we want to use TensorFlow to graph a function, we can use the tf.function decorator to compile a Python function that represents the mathematical function we want to graph. We can then use TensorFlow’s math operations to define the calculations needed to evaluate the function. In TensorFlow 2, the decorated function can be called directly on NumPy input (the legacy tf.Session mechanism from TensorFlow 1 is no longer needed), and the resulting tensor can be passed straight to Matplotlib.

Here is an example of how we can use TensorFlow to graph the function y = x^2 + 2x + 1:

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# Define the function using the @tf.function decorator
@tf.function
def quadratic_function(x):
    return tf.add(tf.add(tf.square(x), tf.multiply(x, 2.0)), 1.0)

# Generate the input values
x = np.linspace(-10, 10, 100)

# Evaluate the function (returns a tensor that Matplotlib accepts directly)
y = quadratic_function(x)

# Plot the function
plt.plot(x, y)
plt.show()




To actually perform a prediction with TensorFlow, both the SOHO data set and the earthquake data set need to be quite large; the size of these datasets matters.

The size of the data sets needed to perform a prediction using TensorFlow will depend on a number of factors, including the complexity of the model we are using, the amount of data available for training and testing, and the quality of the data. In general, the larger the data sets and the more diverse and representative they are, the better the model will be able to generalize and make accurate predictions.

For example, if we are using a simple linear model to predict earthquakes based on solar activity, we may be able to achieve good results with a relatively small data set. However, if we are using a more complex model, such as a deep neural network, we may need a larger data set in order to achieve good results.

It is generally recommended to start with a large enough data set that we can split it into training, validation, and test sets, and to use cross-validation techniques to evaluate the performance of our model. This will help us to determine how well our model is able to generalize to new data and identify any issues that may need to be addressed.
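A sketch of that split (the 70/15/15 proportions are just one common choice; for time-ordered data like daily solar readings, a chronological split is often preferable to the random one shown here):

```python
import numpy as np

# Randomly partition N samples into 70% train, 15% validation, 15% test
rng = np.random.default_rng(42)
n = 1000
indices = rng.permutation(n)

train_end = int(0.70 * n)
val_end = int(0.85 * n)
train_idx = indices[:train_end]
val_idx = indices[train_end:val_end]
test_idx = indices[val_end:]

print(len(train_idx), len(val_idx), len(test_idx))  # 700 150 150
```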


Artificial intelligence (AI) has the potential to play a significant role in the development of earthquake prediction using data from the Solar and Heliospheric Observatory (SOHO) and other sources.

One way that AI can be used in this context is through the application of machine learning algorithms. These algorithms can be trained on large datasets of past earthquake data and SOHO data, and can learn to identify patterns and correlations that may be indicative of future earthquakes. For example, certain patterns in SOHO data may be correlated with increased seismic activity, and machine learning algorithms can be used to identify these patterns and make predictions based on them.

Another way that AI can be used in earthquake prediction is through the development of predictive models. These models can be based on a variety of factors, such as the location, depth, and size of past earthquakes, as well as other factors such as the geology of the region and the presence of fault lines. By analyzing these factors, AI systems can make predictions about the likelihood of future earthquakes in a particular region.

In addition to machine learning and predictive modeling, AI can also be used in the analysis and interpretation of earthquake data. For example, AI systems can be used to analyze large amounts of data from sensors and other sources to identify patterns and trends that may be relevant to earthquake prediction.

Overall, the use of AI in earthquake prediction can help to improve the accuracy and reliability of these predictions, and can potentially help to save lives and minimize damage by allowing for more effective disaster preparedness and response efforts.


  1. “A study that links solar activity to earthquakes is sending shockwaves through the science world: Is solar weather correlated with earthquakes on Earth? A seismic brawl is brewing over a peer-reviewed paper”:
  2. The European Space Agency’s (ESA) SOHO mission website ( provides information about the mission and the data that is collected by SOHO.
  3. The United States Geological Survey (USGS) Earthquake Hazards Program ( provides data and information about earthquakes around the world, including maps and tools for analyzing and visualizing earthquake data.
  4. The International Association of Seismology and Physics of the Earth’s Interior (IASPEI) ( is an international organization that promotes research in seismology and related fields. They have a database of earthquakes and other geophysical data that may be useful for earthquake prediction research.
  5. The Southern California Earthquake Center (SCEC) ( is a consortium of universities and research institutions that conduct research on earthquakes and related phenomena in Southern California. They have a wealth of data and information on earthquakes in the region, as well as tools and resources for analyzing and visualizing earthquake data.
  6. The Seismological Society of America (SSA) ( is a professional society for seismologists and other earth scientists. They publish research on earthquakes and related topics, and have a database of earthquake data that may be useful for predicting earthquakes.
  7. The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) ( is a government agency that maintains a database of geophysical data, including earthquake data. They have tools and resources for analyzing and visualizing this data, and for making predictions about earthquakes and other phenomena.
  8. The TensorFlow website ( is a great place to start if you are new to TensorFlow. It provides documentation, tutorials, and other resources for learning how to use TensorFlow to build machine learning models.
  9. The TensorFlow API documentation ( is a comprehensive resource that provides detailed information about how to use TensorFlow to build machine learning models.
  10. The TensorFlow tutorials ( provide step-by-step instructions for building machine learning models using TensorFlow.
  11. The TensorFlow examples ( provide code snippets and complete examples of machine learning models built using TensorFlow.

The Surveillance Kings

Posted in Uncategorized on November 8, 2022 by Michael Theroux

The Surveillance Kings

Who’s Really Behind Who’s Watching Us

by DocSlow

Originally published in 2600 Magazine, 2014

Several years ago, I had been working on an article involving corporate computer security and how malware was changing the way companies approached security. I had conducted over 100 interviews with various computer security analysts from small companies to very large corporations. Most of these analysts simply related to me that they were too busy fighting on the malware front – both night and day, and had little time or no authority to actually analyze what was going on. Then, I met Brad (not his real name – he was afraid to speak publicly). Brad told me he had information that went far beyond the current story I was writing, and that if we could meet, he would show me all the evidence he had collected.

Brad said that the story was not so much about malware, but rather about a developing surveillance project he had uncovered, and the fact that it could be used like current malware to spy on anyone at any time. This story unfolded around 2005 and is only now relevant in light of all the recent whistle-blowing concerning the surveillance of everyone on the planet by certain governmental 3-letter orgs. Brad had some 4,000 pages of accumulated documentation, all collected and stored on CD-ROMs. Now, it has been almost ten years since this article was started, and recent events warrant that the story be told.

Computer security had been Brad’s vocation for nearly 30 years, with malware forensics as his specialty. He was hired by a very large company to deal with a growing malware problem in the fall of 2005, and he was excited to do his job. He told me he had succumbed to the indoctrination offered him by the company (called “orientation”) and fully accepted their brand so as to be a part of what he assumed would be an elite group within the organization. The company was IBM.

Initially, Brad said that he and the new recruits who were hired with him were given top-of-the-line laptop computers, installation CDs labeled “IBM Transitioner” with Microsoft XP at the core, and a stipend to set up their home offices. Brad jumped into the fray with both boots, eager to get started thwarting those whose intentions were to cause havoc within the company. Brad and the other new hires went about setting up their machines to do the tasks they were assigned, and Brad noted that some curiosities with those laptops immediately started to arise. Two co-workers had been hired alongside Brad, and Brad said they were mostly unobservant of the anomalies that accompanied the new machines – they just assumed “the things were slow.” The first thing Brad noticed after he installed the “IBM Transitioner” OS CDs was that CPU usage at idle was around 60%. The others mentioned that they did notice it, but declined to investigate why it was happening. Brad told me his first simple exploration into the anomaly was to observe what was happening to the XP OS with Sysinternals’ “Process Explorer.” It showed that an application on the hard drive entitled “PC” was responsible for the excessive activity.

Brad then began to look in “Program Files” for the application, and it existed, but the source of the CPU activity presented in Process Explorer was curiously absent. He was sure the rest of this application had to exist somewhere on the hard drive. It didn’t. Brad related that his first assigned task with the company was to research the possibility of a viable BIOS malware application, so he thought maybe that’s where it was residing – in the BIOS – but further investigation revealed it was simply installed on a hidden partition on the hard drive. The structure of the app was such that many calls were derived from the application’s base install and then redirected to the hidden partition. WTF was going on here? Brad was able to access the apps being called on the hidden partition and found audio recording apps, video capture apps, screen capture apps, and keyloggers. Brad thought, “Great… what have I gotten myself into here?” He wondered what the purpose of these apps was, and why they were being run without any interaction from the user. Brad then employed another Sysinternals app, which revealed what was actually going on. He installed and ran “TCPView” on his assigned laptop and found that packets of the collected data were periodically being sent to an IP address in Boulder, Colorado – a mainframe station for IBM. As he tracked the data transfer, it became apparent that the transfers were happening every five minutes. Apparently, IBM was spying on its employees.

Tasked with protecting the company’s some 300,000 employee computers from malware attacks, Brad brought his discovery to the attention of his new “superiors.” He assumed they would understand that this activity was a compromise to the real security of their systems. He was wrong. Brad was told they would get back to him shortly. Two days later they convened a meeting with Brad and told him not to speak of what he discovered, and that he would probably be terminated should he do so. Brad had already alerted a few coworkers that they should slap black electrical tape over the video cam, and insert a dummy phono plug in the external mic jack. They did so, and were soon approached by corporate goons to remove them – or else. Soon thereafter Brad was removed from the Malware Forensics program, and was relegated to a simple sysadmin position. 

IBM has a long and sordid history of nefarious data collecting practices in its background. Edwin Black, author of “IBM and the Holocaust” ( chronicled that the sale and implementation of the IBM Hollerith machines significantly advanced Nazi efforts to exterminate Jews, and IBM has never once officially commented on the allegations prodigiously referenced in Black’s New York Times bestseller.

Black’s New York Times bestseller details the story of IBM’s strategic alliance with Nazi Germany. It is a chilling investigation into corporate complicity, and the atrocities witnessed raise startling questions that throw IBM’s wartime ethics into serious doubt. IBM and its subsidiaries helped create enabling technologies for the Nazis, step-by-step, from identification and cataloging programs of the 1930s to the selection processes of the 1940s. And guess what? Brad was aware of this and told me that he contacted Edwin Black. Black warned him to be careful if he ever related any of his experiences with the company. Shortly after Brad’s encounter with his corporate controllers, he told me he quit IBM. 

“One of the guys I worked closely with on the ‘team’ was fired within days of my resignation,” Brad said. 

“I called him and we chatted about all of this. Initially, he was quite keen on exposing the old guard. A few days later, when I spoke to him on the phone, he stated he wanted no more to do with me….and hung up on me. I never spoke to him again.”

What had become clear to Brad soon after leaving the company, and after analyzing all of the data he had collected, was that IBM was developing and perfecting a surveillance program – not simply for spying on employees, but for spying on US citizens as a whole. Curiously, IBM’s interconnectivity with DARPA, and hints at the company’s surveillance capabilities, were mostly public; they can easily be looked up on the company’s website. IBM’s perfection of early data mining practices had evolved over several decades into applications that could watch over all activities of the general public. Already, private commercial applications were being offered for sale to companies to spy on their employees, and Human Resources divisions across most corporate entities embraced them wholeheartedly. Brad said he has been asked at many of the companies he has worked for to spy on employees and covertly record their computer doings on a very regular basis.

One of the spookiest things Brad told me at the time was that he had uncovered a completely proprietary operating system developed by IBM that almost perfectly mimicked the Microsoft OS on its surface, but that it secretly contained all the surveillance applications noted above – and it was being tested on employees and civilians alike. I asked him how he thought it could be unsuspectingly delivered to the public. Brad said he had evidence that it was actually delivered in real OS security updates, and it could entirely replace the real OS!

I recently contacted Brad (he’s doing well with his own company now) and asked him after all these years what his thoughts were concerning his experiences. 

“With recent allegations that the US Government has implemented programs to spy on its citizens without any accountability, this information finally has some credibility.” Brad then stated, “This technology was being developed long ago, and has now been perfected by all of the giant tech corporations most of us think of as friends of new technology.” I asked Brad if he had kept up on the technology and if he had seen any new developments thereof. He stated that, “Yes, it’s far better than it used to be. Back in 2005 it was being tested only – now it has been widely implemented, and has been ported to many other operating systems. No one is safe from it. The kings of surveillance are all around us, and there’s no going back.”

Time Stand Still…

Posted in Uncategorized on March 15, 2022 by Michael Theroux

Time Stand Still…

My watchmaker’s desk

Foliotroves on Etsy

Posted in Uncategorized on January 7, 2021 by Michael Theroux
Foliotroves started out as an antiquarian bookseller in 2005. Foliotroves specializes in books on the paranormal, the occult, alternative science, and alternative medicine.

In 2019, Foliotroves began publishing new books whose topics reflect but are not limited to those above.