Simulated Life – A Concise History of Thought on the Possibility that We are Living in a Simulation

Posted in Uncategorized on January 3, 2023 by Michael Theroux

by Michael Theroux

Introduction:

The concept of simulation theory proposes that what we experience as reality is not fundamental, but rather a computer-generated simulation created by a highly advanced civilization. The theory has gained significant attention in recent years, with proponents arguing that it explains various mysteries and anomalies in our world. However, it remains controversial and is met with skepticism by many in the scientific community.

In this research paper, we will explore the history and origins of simulation theory, examine the evidence that supports the theory, and consider the implications of the theory for our understanding of reality. We will also examine the criticisms of simulation theory and consider whether it is a viable explanation for the nature of our reality.

History and Origins of Simulation Theory:

The idea that we may be living in a simulated reality can be traced back to ancient philosophers like Plato, whose “Allegory of the Cave” depicts prisoners who mistake shadows cast on a wall for reality itself.

In 2003, philosopher Nick Bostrom proposed the “Simulation Argument,” which suggests that it is quite possibly the case that we are living in a simulated reality. Bostrom’s argument is based on the idea that if advanced civilizations reach a point where they are able to create highly realistic simulations of their ancestors, it is likely that they would do so. Therefore, the probability that we are living in a simulated reality is high if we assume that such advanced civilizations exist.

Evidence for Simulation Theory:

There are several pieces of evidence that have been used to support the idea of simulation theory. One of the most commonly cited is the “Mandela Effect” (see the examples listed below).

Another piece of evidence cited by proponents of simulation theory is the concept of quantum mechanics and the idea that reality is not fixed and can be influenced by our observations and actions. This suggests that the reality we experience may not be the true reality, but rather a constructed one that is influenced by our perceptions.

Here are 10 examples in history regarding the idea that we are living in a simulation:

  1. Plato’s “Allegory of the Cave” – In this allegory, Plato suggests that the world we see and experience is just a shadow of the true reality.
  2. Descartes’ “Cogito, ergo sum” – In his philosophical work, Descartes proposed the idea that the only thing we can be certain of is our own consciousness, leading to the possibility that everything else is an illusion.
  3. The “Brain in a Vat” thought experiment – This thought experiment, most closely associated with philosopher Hilary Putnam, suggests that we could be brains in a vat being fed the illusion of a reality.
  4. Nick Bostrom’s “Simulation Argument” – Bostrom’s argument suggests that it is highly likely that we are living in a simulated reality if we assume that there are advanced civilizations in the universe that can create realistic simulations of their ancestors.
  5. The “Mandela Effect” – This phenomenon, in which large groups of people remember events or details differently than they actually occurred, could be explained by the idea that our memories are being altered by the simulation.
  6. Quantum mechanics – The idea that reality is not fixed and can be influenced by our observations suggests that the reality we experience may not be the true reality, but rather a constructed one influenced by our perceptions.
  7. The “Matrix” movies – These movies explore the concept of a simulated reality in which humans are unknowingly living in a computer-generated world.
  8. The “Westworld” TV show – This show centers on a theme park where visitors can interact with robots in a simulated Wild West setting, leading to the question of whether the robots’ experiences and emotions are real or just programmed responses.
  9. The “Ready Player One” novel and movie – This story explores the concept of a virtual reality world in which people can escape their mundane lives and live out their wildest dreams.
  10. Virtual reality – The development of virtual reality technology has led to the question of whether it is possible to create a simulated reality that is indistinguishable from the real world.

Implications of Simulation Theory:

If we are indeed living in a simulated reality, what does this mean for our understanding of the world and our place in it? One of the most significant implications of simulation theory is that it challenges our understanding of free will and determinism. If we are just characters in a program, are our actions and choices predetermined or do we have the ability to make our own choices and determine our own path?

Simulation theory also raises questions about the nature of consciousness and whether it is something that can be simulated. If we are just characters in a program, does that mean that our experiences and emotions are not real?

Criticisms of Simulation Theory:

Simulation theory is met with skepticism by many in the scientific community, who argue that there is not sufficient evidence to support the idea that we are living in a simulated reality. Some critics argue that the theory relies on assumptions about the capabilities of advanced civilizations and is not based on empirical evidence.

Additionally, simulation theory does not offer a satisfactory explanation for how a simulated reality could be created or how it could be sustained. It is unclear how a simulation of such complexity could be created and maintained, and there is no evidence to suggest that it is possible.

References:

  1. ChatGPT – https://chat.openai.com/ Yes, it already knows. ;-)
  2. My research – Yes, I wrote much of this article.

Superluminal Biological Communications

Posted in Uncategorized on December 31, 2022 by Michael Theroux

By Michael Theroux

Many years ago, I wrote a book called “Biological Communications” (see the reference below). The focus of the book centered on the work of L. George Lawrence in the 1960s and his research on the potential superluminal communications of biological organisms. With a little help from my AI friends and ChatGPT, we can now sort out a bit of this research.

Biological communication refers to the process by which living organisms transmit information to one another through various means, such as chemical signals, sound, or visual signals. These forms of communication can be essential for survival and reproduction, as they allow organisms to coordinate their behaviors and exchange information about their environment.

One interesting area of study within the field of biological communication is the use of superluminal (or faster-than-light) transmission by some organisms. Superluminal transmission refers to the ability to transmit information at speeds that exceed the speed of light, which is considered the maximum speed at which information can travel according to the laws of physics.

There are several examples of superluminal transmission in the natural world, although the mechanisms by which these phenomena occur are not yet fully understood. One well-known example is the process of quorum sensing, which is used by some bacteria to communicate and coordinate their behaviors. Quorum sensing involves the release of chemical signals called autoinducers, which can be detected by other bacteria and trigger a response. Some studies have suggested that quorum sensing may occur at speeds that are faster than the speed of light, although these claims are controversial and have not been widely accepted.

Other examples of superluminal transmission in nature include the ability of some animals to communicate using ultrasound, which is sound waves at frequencies higher than the range of human hearing. Some bats, for example, use ultrasound to navigate and locate prey, and some whales and dolphins use it for communication and echolocation. The mechanisms by which these animals are able to produce and detect ultrasound are not fully understood, and it is possible that they may involve some form of superluminal transmission.

There is also some evidence that plants may be able to communicate using methods that involve superluminal transmission. For example, some studies have suggested that plants may be able to sense the presence of other plants and respond to their needs through the release of chemical signals. The mechanisms by which these signals are transmitted and detected are not well understood, and further research is needed to confirm the existence and nature of these phenomena.

In conclusion, superluminal transmission is a fascinating and poorly understood aspect of biological communication that has the potential to shed light on the ways in which living organisms interact and communicate with one another. Further research is needed to better understand the mechanisms by which superluminal transmission occurs and the ways in which it is used by different organisms.

Reference:

Biological Communications – Selected Articles, Experiments, and Patent Designs.

https://www.etsy.com/listing/922187162/biological-communications-by-michael

Using AI to Compare Historical SOHO Satellite Data with Historical Terrestrial Earthquake Data for Prediction Analysis

Posted in Uncategorized on December 27, 2022 by Michael Theroux

By Michael Theroux

INTRODUCTION

Predicting earthquakes is a complex task that requires a thorough understanding of the underlying geophysical processes that lead to earthquakes and the development of sophisticated mathematical models that can accurately represent those processes. While it is possible that certain types of solar activity, such as solar flares or coronal mass ejections, could have some influence on the Earth’s geomagnetic field and potentially contribute to the occurrence of earthquakes, it should be noted that solar data alone may not be sufficient to predict earthquakes accurately.

In order to predict earthquakes, we would typically rely on a combination of data from various sources, including geological, geophysical, and geodetic data, as well as data from seismographic and geodetic networks. These data are used to develop models that can identify patterns and trends in the occurrence of earthquakes and use these patterns to make predictions about future earthquakes.

One approach that has been used to predict earthquakes is the application of machine learning techniques, such as neural networks, to large datasets of seismographic and geodetic data. These techniques can help to identify patterns and correlations in the data that may be indicative of impending earthquakes, and can be used to make predictions about the likelihood and magnitude of future earthquakes.

Overall, while it is certainly possible to use SOHO solar data in conjunction with other types of data to try to predict earthquakes, this approach is unlikely to be sufficient on its own. A more comprehensive and multidisciplinary approach, incorporating data from a wide range of sources and utilizing advanced modeling techniques, is likely to be more effective in accurately predicting earthquakes.

SOHO

The SOHO satellite (short for “Solar and Heliospheric Observatory”) is a spacecraft that was launched by NASA and the European Space Agency (ESA) in 1995 to study the sun and its effects on the solar system. The SOHO satellite collects a wide variety of data on the sun and solar system, including images, spectra, and other types of data that are used to understand the sun’s behavior and the impacts it has on the earth.

Some of the specific types of data that the SOHO satellite collects include:

Images of the sun: The SOHO satellite has several instruments that are used to capture images of the sun, including the Extreme ultraviolet Imaging Telescope (EIT) and the Large Angle and Spectrometric Coronagraph (LASCO). These instruments capture images of the sun’s surface and atmosphere, allowing us to study the sun’s features, such as sunspots and solar flares. The EIT instrument captures images of the sun in four different wavelengths of extreme ultraviolet light, while the LASCO instrument captures images of the sun’s corona (the outer atmosphere of the sun). These images are used to study the sun’s magnetic field, solar winds, and other features that are important for understanding its behavior.

Spectra of the sun: The SOHO satellite has several instruments that are used to analyze the sun’s light and measure its composition, including the Solar Ultraviolet Measurements of Emitted Radiation (SUMER) instrument and the Coronal Diagnostic Spectrometer (CDS). These instruments capture spectra of the sun’s light, allowing us to study the sun’s chemical composition and understand how it produces and releases energy. The SUMER instrument captures spectra of the sun’s ultraviolet light, while the CDS instrument captures spectra of the sun’s extreme ultraviolet light. These spectra are used to study the sun’s temperature, density, and other characteristics that are important for understanding its behavior.

Data on the solar wind and solar particles: The SOHO satellite has several instruments that are used to measure the flow of the solar wind and solar particles from the sun, including the Solar Wind Anisotropies (SWAN) instrument and the Charge, Element, and Isotope Analysis System with its Mass Time-of-Flight sensor (CELIAS/MTOF). These instruments measure the velocity, density, and composition of the solar wind and solar particles, allowing us to study how the sun affects the rest of the solar system. The SWAN instrument measures the solar wind by detecting the hydrogen atoms that interact with it, while the CELIAS/MTOF instrument measures solar particles by analyzing their charge, element, and isotope composition. These data are used to study the sun’s magnetic field, the solar wind, and other features that are important for understanding its behavior.

Overall, the SOHO satellite collects a wide variety of data on the sun and solar system, providing a wealth of information used to study the sun’s magnetic field, the solar wind, and other features that govern its behavior, as well as to understand how the sun affects the earth and the rest of the solar system.

TENSORFLOW

TensorFlow is an open-source software library for machine learning and artificial intelligence. It was developed by Google and is widely used in industry, academia, and research to build and deploy machine learning models.

At its core, TensorFlow is a library for performing numerical computation using data flow graphs. A data flow graph is a directed graph in which nodes represent mathematical operations and edges represent the flow of data between those operations. TensorFlow uses this data flow graph to perform machine learning tasks, such as training and evaluating machine learning models.

TensorFlow allows users to define and execute data flow graphs using a high-level API, making it easy to build and deploy machine learning models. It also includes a number of tools and libraries for tasks such as data preprocessing, visualization, and optimization, making it a comprehensive toolkit for machine learning and artificial intelligence.
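As a minimal sketch of this idea (assuming TensorFlow 2.x), the @tf.function decorator below traces an ordinary Python function into a data flow graph whose nodes are the multiply and add operations and whose edges carry the tensors between them:

import tensorflow as tf

# Tracing this function with @tf.function builds a data flow graph:
# the nodes are the multiply and add operations, and the edges carry
# the tensors that flow between them.
@tf.function
def weighted_sum(x, w, b):
    return tf.add(tf.multiply(x, w), b)

# Calling the traced function executes the graph and returns a tensor.
result = weighted_sum(tf.constant(3.0), tf.constant(2.0), tf.constant(1.0))
print(result.numpy())  # 7.0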

One of the main strengths of TensorFlow is its ability to run on a variety of platforms, including CPUs, GPUs, and TPUs (tensor processing units). This allows users to easily scale their machine-learning models to handle large amounts of data and perform complex tasks.

TensorFlow is widely used in a variety of applications, including image and speech recognition, natural language processing, and machine translation. It is also used in many research projects, making it a popular choice for machine learning and artificial intelligence research.

Using TensorFlow to compare historical SOHO satellite data with historical earthquake data can be a powerful way to uncover insights and patterns that may not be immediately obvious. The following will explore the various ways in which TensorFlow can be used to compare these two types of data and identify any potential relationships between them.

To begin, we need to gather and prepare the data for analysis. This will involve obtaining the SOHO satellite data and earthquake data, and then formatting it in a way that is suitable for use with TensorFlow.

There are several sources of SOHO satellite data that can be used for this purpose, including NASA’s SOHO website and the European Space Agency’s SOHO data archive. These sources provide a wealth of data on the sun and solar system, including images, spectra, and other types of data that can be used to understand the sun’s behavior and the impacts it has on the earth.

Similarly, there are several sources of earthquake data that can be used for this analysis, including the US Geological Survey’s earthquake database and the Global Earthquake Model’s OpenQuake engine. These sources provide data on earthquakes around the world, including information on the location, magnitude, and other characteristics of each earthquake.

Once we have obtained the SOHO satellite data and earthquake data, we need to format it in a way that is suitable for use with TensorFlow. This may involve cleaning and preprocessing the data, as well as selecting specific subsets of the data to use for the analysis. It is also necessary to extract features from the data that can be used to identify patterns and correlations between the two datasets.
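As a rough sketch of what this preparation might look like (the file names and the ‘timestamp’, ‘solar_activity’, and ‘magnitude’ columns are placeholders for illustration, not the actual SOHO or earthquake-catalog schemas), the two datasets could be cleaned and aligned on a common daily index with Pandas:

import pandas as pd

# Placeholder file and column names, used for illustration only.
solar = pd.read_csv('SOHO_solar_data.csv', parse_dates=['timestamp'])
quakes = pd.read_csv('earthquake_data.csv', parse_dates=['timestamp'])

# Drop rows with missing values and aggregate each dataset to daily resolution
# so that the two time series can be aligned on a common index.
solar_daily = solar.dropna().set_index('timestamp').resample('D')['solar_activity'].mean()
quake_daily = quakes.dropna().set_index('timestamp').resample('D')['magnitude'].max()

# Join the aligned series into a single feature table for modeling.
features = pd.concat([solar_daily, quake_daily], axis=1).dropna()
features.to_csv('merged_daily_features.csv')
print(features.head())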

Now that we have the data prepared and ready for analysis, we use TensorFlow to build a model that compares the two datasets and identifies any correlations or patterns between them. This involves using a variety of techniques, such as deep learning, machine learning, or statistical analysis, depending on the specific goals of the analysis and the characteristics of the data.

We can use TensorFlow’s deep learning capabilities to build a neural network that takes the SOHO satellite data and earthquake data as input and outputs a prediction of the likelihood of an earthquake occurring. By training the model on a large dataset of historical data, we fine-tune the model to accurately predict earthquakes based on the SOHO satellite data.
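A minimal sketch of such a network, using the tf.keras API, might look like the following. The array shapes and the binary label (1 if an earthquake above some magnitude threshold occurred in the following window, 0 otherwise) are illustrative assumptions only, not a validated prediction model:

import numpy as np
import tensorflow as tf

# Placeholder data for illustration: 1000 daily samples with 5 solar features
# and a binary label indicating whether a significant earthquake followed.
X = np.random.rand(1000, 5).astype('float32')
y = np.random.randint(0, 2, size=(1000,))

# A small feed-forward network that outputs an earthquake probability.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(5,)),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Hold out 20% of the samples for validation during training.
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)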

We can also use TensorFlow’s machine learning algorithms to identify patterns in the data and identify any potential correlations between the SOHO satellite data and earthquake data. This involves using techniques such as clustering, classification, or regression to analyze the data and identify any trends or patterns.
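Before reaching for deep learning at all, a simple correlation check over the merged table is a useful first pass (again assuming the illustrative ‘solar_activity’ and ‘magnitude’ columns from the preparation sketch above):

import pandas as pd

# Assumes the merged daily table produced in the preparation sketch above.
features = pd.read_csv('merged_daily_features.csv')

# Pearson correlation between daily solar activity and earthquake magnitude;
# a value near zero would suggest no simple linear relationship.
correlation = features['solar_activity'].corr(features['magnitude'])
print(f'Pearson correlation: {correlation:.3f}')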

The key to using TensorFlow to compare historical SOHO satellite data with historical earthquake data is to carefully select and prepare the data, and then use the appropriate techniques to analyze and interpret the results. With the right tools and techniques, it is possible to uncover valuable insights and patterns that can help us better understand the relationships between these two types of data.

The following code first loads the SOHO solar data and the earthquake data using the tf.keras.utils.get_file function, which downloads each file from the specified URL and saves it locally. It then uses the pd.merge function from the Pandas library to merge the two datasets on the ‘timestamp’ column, and finally plots the ‘solar_activity’ column against the ‘magnitude’ column as a scatter plot using Matplotlib’s plt.scatter function.

import tensorflow as tf
import pandas as pd
import matplotlib.pyplot as plt

# Load the SOHO solar data
SOHO_solar_data = tf.keras.utils.get_file(
    'SOHO_solar_data.csv', 'http://example.com/path/to/SOHO_solar_data.csv')
SOHO_solar_data = pd.read_csv(SOHO_solar_data)

# Load the earthquake data
earthquake_data = tf.keras.utils.get_file(
    'earthquake_data.csv', 'http://example.com/path/to/earthquake_data.csv')
earthquake_data = pd.read_csv(earthquake_data)

# Merge the two datasets on the 'timestamp' column
merged_data = pd.merge(SOHO_solar_data, earthquake_data, on='timestamp')

# Compare the two datasets: plot solar activity against earthquake magnitude
plt.scatter(merged_data['solar_activity'], merged_data['magnitude'])
plt.xlabel('solar_activity (SOHO solar data)')
plt.ylabel('magnitude (earthquake data)')
plt.show()

If we want to use TensorFlow to graph a function, we can use the tf.function decorator to define a TensorFlow function that represents the mathematical function we want to graph. We can then use TensorFlow’s math operations to define the calculations needed to evaluate the function; in TensorFlow 2, calling the decorated function executes it and returns a tensor, which can be converted to a NumPy array with .numpy() for plotting.

Here is an example of how we can use TensorFlow to graph the function y = x^2 + 2x + 1:

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# Define the function using the @tf.function decorator
@tf.function
def quadratic_function(x):
    return tf.add(tf.add(tf.square(x), tf.multiply(x, 2.0)), 1.0)

# Generate the input values
x = np.linspace(-10, 10, 100)

# Evaluate the function and convert the result to a NumPy array
y = quadratic_function(x).numpy()

# Plot the function
plt.plot(x, y)
plt.xlabel('x')
plt.ylabel('y')
plt.show()

DATA

To actually make predictions with TensorFlow, the SOHO and earthquake data sets need to be quite large; the size of these datasets matters.

The size of the data sets needed to perform a prediction using TensorFlow will depend on a number of factors, including the complexity of the model we are using, the amount of data available for training and testing, and the quality of the data. In general, the larger the data sets and the more diverse and representative they are, the better the model will be able to generalize and make accurate predictions.

For example, if we are using a simple linear model to predict earthquakes based on solar activity, we may be able to achieve good results with a relatively small data set. However, if we are using a more complex model, such as a deep neural network, we may need a larger data set in order to achieve good results.

It is generally recommended to start with a large enough data set that we can split it into training, validation, and test sets, and to use cross-validation techniques to evaluate the performance of our model. This will help us to determine how well our model is able to generalize to new data and identify any issues that may need to be addressed.
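A simple sketch of that split, using scikit-learn’s train_test_split on placeholder arrays (the 70/15/15 proportions are an arbitrary illustration), could look like this:

import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder feature matrix and labels for illustration.
X = np.random.rand(1000, 5)
y = np.random.randint(0, 2, size=1000)

# First hold out 30% of the data, then divide that portion evenly
# into validation and test sets (roughly 70/15/15 overall).
X_train, X_temp, y_train, y_temp = train_test_split(X, y, test_size=0.3, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_temp, y_temp, test_size=0.5, random_state=42)

print(len(X_train), len(X_val), len(X_test))  # 700 150 150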

CONCLUSION

Artificial intelligence (AI) has the potential to play a significant role in the development of predicting earthquakes using data from the Solar and Heliospheric Observatory (SOHO) and other sources.

One way that AI can be used in this context is through the application of machine learning algorithms. These algorithms can be trained on large datasets of past earthquake data and SOHO data, and can learn to identify patterns and correlations that may be indicative of future earthquakes. For example, certain patterns in SOHO data may be correlated with increased seismic activity, and machine learning algorithms can be used to identify these patterns and make predictions based on them.

Another way that AI can be used in earthquake prediction is through the development of predictive models. These models can be based on a variety of factors, such as the location, depth, and size of past earthquakes, as well as other factors such as the geology of the region and the presence of fault lines. By analyzing these factors, AI systems can make predictions about the likelihood of future earthquakes in a particular region.

In addition to machine learning and predictive modeling, AI can also be used in the analysis and interpretation of earthquake data. For example, AI systems can be used to analyze large amounts of data from sensors and other sources to identify patterns and trends that may be relevant to earthquake prediction.

Overall, the use of AI in earthquake prediction can help to improve the accuracy and reliability of these predictions, and can potentially help to save lives and minimize damage by allowing for more effective disaster preparedness and response efforts.

RESOURCES

  1. “A study that links solar activity to earthquakes is sending shockwaves through the science world: Is solar weather correlated with earthquakes on Earth? A seismic brawl is brewing over a peer-reviewed paper”: https://www.salon.com/2020/07/21/a-study-that-links-solar-activity-to-earthquakes-is-sending-shockwaves-through-the-science-world/
  2. The European Space Agency’s (ESA) SOHO mission website (https://www.esa.int/Our_Activities/Space_Science/SOHO) provides information about the mission and the data that is collected by SOHO.
  3. The United States Geological Survey (USGS) Earthquake Hazards Program (https://earthquake.usgs.gov/) provides data and information about earthquakes around the world, including maps and tools for analyzing and visualizing earthquake data.
  4. The International Association of Seismology and Physics of the Earth’s Interior (IASPEI) (https://www.iaspei.org/) is an international organization that promotes research in seismology and related fields. They have a database of earthquakes and other geophysical data that may be useful for earthquake prediction research.
  5. The Southern California Earthquake Center (SCEC) (https://www.scec.org/) is a consortium of universities and research institutions that conduct research on earthquakes and related phenomena in Southern California. They have a wealth of data and information on earthquakes in the region, as well as tools and resources for analyzing and visualizing earthquake data.
  6. The Seismological Society of America (SSA) (https://www.seismosoc.org/) is a professional society for seismologists and other earth scientists. They publish research on earthquakes and related topics, and have a database of earthquake data that may be useful for predicting earthquakes.
  7. The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) (https://www.ncei.noaa.gov/) is a government agency that maintains a database of geophysical data, including earthquake data. They have tools and resources for analyzing and visualizing this data, and for making predictions about earthquakes and other phenomena.
  8. The TensorFlow website (https://www.tensorflow.org/) is a great place to start if you are new to TensorFlow. It provides documentation, tutorials, and other resources for learning how to use TensorFlow to build machine learning models.
  9. The TensorFlow API documentation (https://www.tensorflow.org/api_docs) is a comprehensive resource that provides detailed information about how to use TensorFlow to build machine learning models.
  10. The TensorFlow tutorials (https://www.tensorflow.org/tutorials) provide step-by-step instructions for building machine learning models using TensorFlow.
  11. The TensorFlow examples (https://www.tensorflow.org/examples) provide code snippets and complete examples of machine learning models built using TensorFlow.

The Surveillance Kings

Posted in Uncategorized on November 8, 2022 by Michael Theroux

Who’s Really Behind Who’s Watching Us

by DocSlow

Originally published in 2600 Magazine (https://www.2600.com/) 2014

Several years ago, I had been working on an article involving corporate computer security and how malware was changing the way companies approached security. I had conducted over 100 interviews with various computer security analysts from small companies to very large corporations. Most of these analysts simply related to me that they were too busy fighting on the malware front – both night and day, and had little time or no authority to actually analyze what was going on. Then, I met Brad (not his real name – he was afraid to speak publicly). Brad told me he had information that went far beyond the current story I was writing, and that if we could meet, he would show me all the evidence he had collected.

Brad said that the story was not so much about malware, but rather about a developing surveillance project he had uncovered, and the fact that it could be used like current malware to spy on anyone at any time. This story unfolded around 2005 and is only now relevant in light of all the recent whistle-blowing concerning the surveillance of everyone on the planet by certain governmental three-letter orgs. Brad had some 4,000 pages of accumulated documentation, all collected and stored on CD-ROMs. Now, almost ten years after this article was started, recent events warrant that the story be told.

Computer security was Brad’s main avocation for nearly 30 years – with malware forensics as his specialty. He was hired by a very large company to deal with a growing malware problem in the fall of 2005, and he was excited to do his job. He told me he had succumbed to the indoctrination offered him by the company (called “orientation”) and fully accepted their brand so as to be a part of what he assumed would be an elite group within the organization. The company was IBM.

Initially, Brad said that he and the new recruits hired with him were given tip-top laptop computers, installation CDs labeled “IBM Transitioner” with Microsoft XP at their core, and a stipend to set up their home offices. Brad jumped into the fray with both boots, eager to get started thwarting those whose intentions were to cause havoc within the company. Brad and the other new hires went about setting up their machines for the tasks they were assigned, and Brad noted that some curiosities with those laptops immediately started to arise. Two co-workers were initially hired with Brad, and Brad said they were mostly unobservant of the anomalies that accompanied the new machines – they just assumed “the things were slow.” The first thing Brad noticed after he installed the “IBM Transitioner” OS CDs was that the CPU usage at idle was around 60%. The others mentioned that they did notice it, but declined to investigate why this was happening. Brad told me his first simple exploration into the anomaly was to observe what was happening to the XP OS with Sysinternals’ “Process Explorer.” It showed that an application on the hard drive entitled “PC” was responsible for the excessive activity.

Brad then stated that he began to look in “Program Files” for the application, and it existed, but the components responsible for the CPU activity shown in Process Explorer were curiously absent. He was sure the rest of this application had to exist somewhere on the hard drive. It didn’t. Brad related that his first assigned task with the company was to research the possibility of a viable BIOS malware application, so he thought maybe that’s where it was residing – in the BIOS – but further investigation revealed it was simply installed on a hidden partition on the hard drive. The structure of the app was such that many calls were derived from the application’s base install and then redirected to the hidden partition. WTF was going on here? Brad was able to access the apps being called on the hidden partition and found audio recording apps, video capture apps, screen capture apps, and keyloggers. Brad thought, “Great… what have I gotten myself into here?” He wondered what the purpose of these apps was, and why they were being run without any interaction from the user. Brad then employed another Sysinternals app, which appeared to reveal what was actually going on. He installed and ran “TCPView” on his assigned laptop and found that packets of the collected data were periodically being sent to an IP address in Boulder, Colorado – a mainframe station for IBM. As he tracked the data transfer, it became apparent that the transfers were happening every five minutes. Apparently, IBM was spying on its employees.

Tasked with protecting the company’s roughly 300,000 employee computers from malware attacks, Brad brought his discovery to the attention of his new “superiors.” He assumed they would understand that this activity was a compromise to the real security of their systems. He was wrong. Brad was told they would get back to him shortly. Two days later they convened a meeting with Brad and told him not to speak of what he had discovered, and that he would probably be terminated should he do so. Brad had already alerted a few coworkers that they should slap black electrical tape over the video cam and insert a dummy phono plug in the external mic jack. They did so, and were soon approached by corporate goons to remove them – or else. Soon thereafter, Brad was removed from the Malware Forensics program and was relegated to a simple sysadmin position.

IBM has a long and sordid history of nefarious data-collection practices. Edwin Black, author of “IBM and the Holocaust” (http://www.ibmandtheholocaust.com/), chronicled how the sale and implementation of IBM Hollerith machines significantly advanced Nazi efforts to exterminate Jews, and IBM has never once officially commented on the allegations prodigiously referenced in Black’s New York Times bestseller.

Black’s New York Times bestseller details the story of IBM’s strategic alliance with Nazi Germany. It is a chilling investigation into corporate complicity, and the atrocities witnessed raise startling questions that throw IBM’s wartime ethics into serious doubt. IBM and its subsidiaries helped create enabling technologies for the Nazis, step-by-step, from identification and cataloging programs of the 1930s to the selection processes of the 1940s. And guess what? Brad was aware of this and told me that he contacted Edwin Black. Black warned him to be careful if he ever related any of his experiences with the company. Shortly after Brad’s encounter with his corporate controllers, he told me he quit IBM. 

“One of the guys I worked closely with on the ‘team’ was fired within days of my resignation,” Brad said. 

“I called him and we chatted about all of this. Initially, he was quite keen on exposing the old guard. A few days later, when I spoke to him on the phone, he stated he wanted no more to do with me….and hung up on me. I never spoke to him again.”

What had become clear to Brad soon after leaving the company, and after analyzing all of the data he had collected, was that IBM was developing and perfecting a surveillance program – not simply for spying on employees, but for spying on US citizens as a whole. IBM’s ties to DARPA, and hints at the company’s surveillance capabilities, were curiously public; much of it could easily be looked up on the company’s own website. Their perfection of early data mining practices had evolved over several decades into applications that could watch over all activities of the general public. Private commercial applications were already being offered for sale to companies to spy on their employees, and Human Resources divisions across most corporate entities embraced them wholeheartedly. Brad said he has been asked at many of the companies he has worked for to spy on employees and covertly record their computer doings on a very regular basis.

One of the spookiest things Brad told me at the time was that he had uncovered a completely proprietary operating system developed by IBM that almost perfectly mimicked the Microsoft OS on its surface, but that it secretly contained all the surveillance applications noted above – and it was being tested on employees and civilians alike. I asked him how he thought it could be unsuspectingly delivered to the public. Brad said he had evidence that it was actually delivered in real OS security updates, and it could entirely replace the real OS!

I recently contacted Brad (he’s doing well with his own company now) and asked him after all these years what his thoughts were concerning his experiences. 

“With recent allegations that the US Government has implemented programs to spy on its citizens without any accountability, this information finally has some credibility.” Brad then stated, “This technology was being developed long ago, and has now been perfected by all of the giant tech corporations most of us think of as friends of new technology.” I asked Brad if he had kept up on the technology and if he had seen any new developments thereof. He stated that, “Yes, it’s far better than it used to be. Back in 2005 it was being tested only – now it has been widely implemented, and has been ported to many other operating systems. No one is safe from it. The kings of surveillance are all around us, and there’s no going back.”

Time Stand Still…

Posted in Uncategorized on March 15, 2022 by Michael Theroux

My watchmaker’s desk

Foliotroves on Etsy

Posted in Uncategorized on January 7, 2021 by Michael Theroux
Foliotroves started out as an antiquarian bookseller in 2005. Foliotroves specializes in books on the paranormal, the occult, alternative science, and alternative medicine.

In 2019, Foliotroves began publishing new books whose topics reflect but are not limited to those above.

https://www.etsy.com/shop/FolioTroves

MacRoux 432

Posted in Uncategorized on December 12, 2020 by Michael Theroux

On the Q.Psience Project show tonight!

Posted in Uncategorized on November 13, 2020 by Michael Theroux

Michael Theroux on the Q.PSIENCE Project Radio Show

Posted in Uncategorized on October 24, 2020 by Michael Theroux

Michael Theroux: Meetings with Remarkable Kooks

Friday, November 13, 2020

10:00pm-12:00am EST

Q.PSIENCE Project

I’ll be a guest on the show, Q.PSIENCE with Jill Hanson. We’ll be talking about my latest book, “Meetings with Remarkable Kooks” Tune in for some fun!

https://www.qpsience.org/

Ground – The Documentary

Posted in Uncategorized on October 24, 2020 by Michael Theroux

In June of 2020, amid the Coronavirus pandemic, I had a stroke. It happened suddenly and seemingly from nowhere. The event was a wake-up call on many levels and made me consider my very existence and purpose. As a relatively health-conscious person, it was a complete shock to me, and I became preoccupied with figuring out why this could happen to me. I also felt compelled to do a few things I had wanted to do for some time. One was to hike one of the long National Scenic and Historic Trails, such as the Appalachian Trail or the Pacific Crest Trail, and create a documentary film about the adventure. Since I live in Wisconsin, I decided to do the 1,200-mile Ice Age Trail. This documentary will not only log my journey along the Ice Age Trail but will chronicle my research on strokes and hopefully help other stroke victims in their recovery from this life-threatening incident. If you would like to support this project, please go to:

https://www.facebook.com/GroundDocumentary/
https://www.patreon.com/michaeltheroux