AI: How Afraid Should We Be? | Mark Davidson

April 11, 2023


Mainstream media is rife with stories about AI, artificial intelligence. These stories run the gamut, from AI will solve all of our problems, to AI will spell the end of humanity.

But they never bother to explore what AI actually is.

Can it ‘think’? What does the term ‘think’ in this context even mean? Is AI sentient, that is, is it self-aware? Does it have a conscience, or could it develop one?

Many of us in the freedom movement are very concerned about the use that the globalists have for AI as a tool to surveil and control us all. And certainly it could do that; in fact, in some ways it already is.

But what can actually be done with it? Given that the globalists certainly do not have our best interests in mind, it’s important for us to understand the capabilities, and the limitations, of this technology.

Mark Davidson has spent much of his life studying the technologies surrounding AI. He has a degree in Electronic Engineering, Telecommunications and Computing from the University of Essex in England. He invented a virtual machine computing language, and he worked for AT&T and Compaq as an expert consultant in the field of computer science as it relates to human cognition. He has also worked on financial systems, where he built financial models and explored AIs for automated trading.

Mark joins me today to explain exactly what AI is, and what it is not. What it can do already, and what it might be able to do in the future.

Should we be afraid of AI? The short answer is yes, and by the end of this interview, you will not only understand why, but just how afraid we should be.

For Full Interviews Subscribe to:

IronWillReport.com


Will Dove  01:59

Mark, welcome to the show.

Mark Davidson

Thanks, Will.

 

Will Dove  02:03

Now, I was thrilled when I made contact with you, and you and I had a really long conversation a couple of weeks ago where you filled in some gaps in my own knowledge. So I think where we need to start this discussion about AI is for you to tell our viewers what AI is and what it isn't.

 

Mark Davidson  02:21

Okay, well, yeah, I do feel at the moment, reading things in the media and so on, that there's a sort of attempt to imply that ChatGPT, or these current language AIs, are somehow some form of consciousness, or alive. But if you take a calculator or a computer, these are simply the next incarnation of that technology, really. So the real question is, well, what is technology? How does a computer work? How does any of this type of technology work? Which is complicated. But if you want, we can go into that a little bit, because I do feel like most people probably don't have any understanding of how that technology works.

 

Will Dove  03:13

And I know we'll get there, we'll get there. But for now, I think what we need to show people is that there's a difference between an artificial intelligence and a self-aware sentience, and they're not remotely the same thing.

 

Mark Davidson  03:25

Yeah, I mean, it's a very controversial area. If you look into it, you'll get sort of fudging statements about how philosophers and scientists are arguing about that. So it depends what hat you put on, what perspective. But if we put a science perspective on it, sentience implies some level of self-awareness. So one of the things I mentioned in my notes is this idea of a mirror test. There's also the Turing test, but neither of those is really a test for sentience. The mirror test is: when an animal looks in the mirror, does it think it's another animal, or does it see itself? What's happening is that people are seeing sentience in these AIs because they're essentially seeing themselves, but they think they're seeing something there. They're failing the mirror test, essentially, which I think is interesting.

Why is that happening? The example I gave you is this: if your audience has ever seen a piano being played by a mannequin, playing Mozart, say, no one would mistake that for being sentient, right? Somehow you would know that's just a very clever automaton. This is exactly the same thing. These AIs are just a very clever automaton, if you like, a program that we've created. There's absolutely no real difference. But people are being fooled, I think, because it's language somehow. I don't really know why that would be. Maybe just because it's new, something people haven't seen, so the novelty is making them think there's something essentially there. And the other way I explained it to you: the way all electronics works, people have heard of semiconductors, right? What is a semiconductor? It's essentially sand, silicon, what we call in chemistry an inorganic compound. Or, as a joke, a rock. You know, computers are very clever rocks.

That's essentially it: we pass electricity through them, and we make them do clever things. These AIs are just made from the same thing. So how can they suddenly have magical sentience? Unless there's some sort of sorcery involved, magic spells. I mean, where would it be coming from, right?

 

Will Dove  06:04

So, I think maybe an illustration we could give that people would understand, and let's for a second refer to fiction, the Terminator movies. Forget about Skynet; let's just look at the Terminator itself, played by Arnold Schwarzenegger. As the movies develop, we discover that this machine, this robot, is actually self-aware, to the point where it can even develop a sense of feelings. That's a sentience. What we're talking about with AI is more like a Roomba or a robotic lawnmower. It's got code that's been entered into it that allows it to make certain limited decisions, but only within the scope of what it has been programmed to deal with. And I think maybe that's a better illustration of what AI is versus what a sentience is. Your thoughts on that?

 

Mark Davidson  06:51

Yeah, I mean, it depends how far you want to go into how these AIs actually work. A traditional program works by a series of instructions, very much like a recipe, and you have to follow the recipe in order. We call this sequential logic; there's a sequence or ordering to it. These AIs are sort of the next step, or evolution, in programming, in that there is a program, an algorithm, that coders have to write, and they call it training. It trains the AI, okay? So the first thing it doesn't have is what we call autonomy. It can't go away on its own and just learn. There's a computer program that a team of programmers has written, and then they need data.

So, for example, in this case they take all the data on the internet and pass it through this algorithm. Now, they say it's learning, and we get into semantics: is that really learning, or is that just a new type of programming? In my view, it's a type of programming. For it to be even remotely considered sentient, it would have to have a level of autonomy. The example I give is a baby: a baby knows how to learn. It's not like you sit there with a team of people teaching it how to learn, right? It just knows. We don't have that here. So it's essentially programming, and what looks like thinking is just a response to the data it's given. It's not like there's anything we would perceive as thinking, thinking being, you know, a sort of history of your thoughts over the last week or month, and then questioning that. It's just a very clever pattern-recognition tool.

Now, how that works is through a neural network, and neural networks are quite complicated to explain. They don't work the way a normal computer program does; it's not a program in that sense, okay? So it's much more flexible. Once it's been taught through this training data, it can then have a level of what we call statistical inference. It can infer things from patterns, if you want. So it doesn't have to be taught everything; once it's been trained, it can infer other patterns. But there's no sentience, there's no consciousness, there's nothing there. It's just a very clever rock, right, that is doing this.
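The training-versus-sequential-logic distinction Mark draws can be made concrete in a few lines. This is a minimal sketch, not how any real language model is built: a fixed "recipe" function next to a single artificial neuron that adjusts numeric weights from example data and can then answer by the rule it inferred. All names and the training task (logical AND) are invented for illustration.

```python
# Sequential logic: a fixed recipe, executed in order, step by step.
def recipe_double(x):
    step1 = x + x      # instruction 1: add the number to itself
    return step1       # instruction 2: hand back the result

# "Training": a single artificial neuron (perceptron) is nudged by an
# algorithm until its numeric weights reproduce the patterns in the data.
def train_perceptron(examples, epochs=20, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in examples:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out
            w0 += lr * err * x0   # move weights toward the examples
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

# The training data: the logical AND function as labelled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(data)

def infer(x0, x1):
    # Statistical inference: apply the learned weights; no recipe was given.
    return 1 if w0 * x0 + w1 * x1 + b > 0 else 0

print([infer(x0, x1) for (x0, x1), _ in data])  # [0, 0, 0, 1]
```

The second half was never told the AND rule; it was only shown examples, which is the sense in which such a system "infers" patterns with no sentience behind it.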

 

Will Dove  09:28

Right. And I think maybe another illustration we can give that would make sense to people: let's get back to the idea of that robotic lawnmower. The programmers can teach that lawnmower that if it runs over something that, say, jams the blades, it causes the circuit to trip so that the operator has to reset it. Okay, it might run into a stick. And let's say that this robotic lawnmower has a camera on it. I don't think they do, but let's just say for the sake of this that they did. So it sees the stick, and the first time it just rides over the stick and trips the breaker. Okay, now it learns that riding over a stick will shut it down. So the next time it encounters an obstacle, it might not even be a stick, maybe it's a rock, but it's still going to stop, or it's going to go around it, because that's what it was programmed to do. But here's the point, and the reason why I said this thing has a camera: you can't teach it to read. It doesn't matter if you hold a book in front of it and point to the words; you can't teach it to read. It's not programmed for that. It's just a machine.

 

Mark Davidson  10:26

Now, that's a very good point. Yeah, it's kind of like the original human versus Deep Blue chess matches, which people may know about. And they said, oh, you know, it was an AI, and it worked. But what they didn't say was that Deep Blue had been trained by grandmasters who played it every single game. So what it has is what we call specificity to a specific task. In this case, these language AIs are specifically, through these algorithms, taught all that language. It's not like they can then go and learn anything else. They're not only limited to language, they're limited to the training data they had. And we need to talk about bias here. The point I keep making in conversations is that when it's creating these answers, it's not creating anything; it's purely based on the original data it had. It can't create an answer that wasn't in that data. So if that data is biased, it can give you biased answers.
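Mark's point about bias can be shown with a deliberately trivial "model": one that just counts which answer followed each question in its training corpus. The corpus, questions, and answers below are all invented; real language models are vastly more complex, but the dependency on training data is the same in kind.

```python
from collections import Counter

def train(corpus):
    """'Training' here is just tallying which answer follows each question."""
    table = {}
    for question, answer_text in corpus:
        table.setdefault(question, Counter())[answer_text] += 1
    return table

def answer(table, question):
    """Inference: return the most frequent answer seen during training."""
    counts = table.get(question)
    return counts.most_common(1)[0][0] if counts else "no data"

# A skewed corpus: nine sources say "safe", one says "risky".
corpus = [("is X safe?", "safe")] * 9 + [("is X safe?", "risky")]
table = train(corpus)

print(answer(table, "is X safe?"))  # "safe": the majority view in the data
print(answer(table, "is Y safe?"))  # "no data": nothing outside the corpus
```

The system cannot produce an answer that was never in its corpus, and it faithfully reproduces whatever skew the corpus carries.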

 

Will Dove  11:35

Right. So now that we've established that an AI is just a complicated program, it's not a sentience, and it can't learn anything outside its programming, right? It doesn't have any consciousness. It would fail the mirror test. It might pass a Turing test, but I have a problem with the Turing test, because a lot depends upon how intelligent the human being in the Turing test is.

 

Mark Davidson  11:55

Yeah, that's right. The Turing test, we should say, was invented before digital electronics really arrived, so we didn't really understand what would be a good test. And right now, no scientist can really come up with a good test for sentience, because we'd have to know exactly what sentience was. And then we get into sort of metaphysical discussions. I think I said in my notes that we'd have to get into some quantum physics; we can do that, but there isn't a simple answer.

 

Will Dove  12:25

I think we have to digress for just a minute to explain the Turing test, because you and I have both referred to it. So I'm going to give a very quick illustration. In the Turing test, you've got a human operator sitting at a console, interacting with two other parties which it cannot see. One of them is human, and one of them is a computer, something like ChatGPT. And the Turing test says that if, after a certain amount of time, the human being sitting there can't tell the difference between those two, can't tell you which one was human and which one was a machine, then the machine has passed the Turing test. But that doesn't mean anything. Because, first of all, you've got the variable of how intelligent the human being sitting there is. If it's a six-year-old child, it's not going to take much to fool them. And even if you're talking about an intelligent adult, once that chat system gets complicated enough, sophisticated enough, given the range of things a person could talk about, they might still be fooled. It's still not a sentience; it's just a program.

 

Mark Davidson  13:30

Yeah. And also, from the conversations I've had with it, I can illustrate this through a question. If you said to it, "What's your favorite color?", it's going to say, "I don't have a favorite color, because I don't feel anything." Well, does that fail the Turing test? I mean, what exactly is this conversation you're going to have? Do you only limit yourself to questions that don't reveal the true nature of what this thing is? And really, that test goes back to a period before digital electronics, so we didn't know about something called emulation. Emulation is where you can make a computer appear to be something, in this case human language, because underneath, inside the black box, it's not working anything like the way we work. We just make it appear to be so. My example of an automaton playing the piano is really the same thing. It's all emulation. The neural network works a bit like our minds, but the rest of it does not.
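Emulation in Mark's sense is decades old; the classic demonstration is Joseph Weizenbaum's ELIZA (1966), which fooled some users with nothing but canned pattern-to-response rules. The sketch below is a toy in that spirit, with invented rules; it produces conversational-looking replies while understanding nothing.

```python
import re

# Surface rules: a matched phrase is echoed back inside a canned template.
RULES = [
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bbecause (.+)", "Is that the real reason?"),
]

def reply(text):
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # fallback keeps up the illusion of attention

print(reply("I feel anxious about AI"))  # Why do you feel anxious about AI?
print(reply("Nice weather today"))       # Tell me more.
```

Inside the black box there is only string matching; the appearance of a listener is supplied entirely by the person reading the replies.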

 

Will Dove  14:41

Right. And I think another thing we can say about the Turing test is that a machine which can emulate conversation well enough to fool a human being, well, that's only one aspect of human intelligence; there are other aspects it doesn't have. So now that we've determined what AI is and what it isn't, let's get into what they can do with it. Because, as my viewers know, the globalists want to use AI to control us, to monitor us, to watch everything we do, to restrict our movements, restrict what we can buy, and, eventually, they would love to be able to control what we think. So let's start with what they can do with it now, and then we'll get into where they might be able to go with it

 

Mark Davidson  15:26

in time. Yeah, well, I'd like to be able to use the future tense, but I would say AI has been in use for the last 20 years. As I've explained to people, public-domain science and knowledge is 10 to 20 years behind what is in, for example, DARPA or military black sites. Like with COVID, where we found out these labs were working on viruses, using gene splicing, using technologies way ahead of the public; it's the same with this. So we don't really know exactly what has already been used, but it will be used by, for example, intelligence agencies. I have heard, and I'm pretty sure you have heard, that they've been collecting everybody's emails, online searches, phone calls. And what they can do is use these AIs to scan all of that very efficiently, because a person doing that is very inefficient, right, like traditional intelligence work.

So the first thing they're really good at is intelligence gathering, looking for patterns. Now, they would say they were looking for terrorists, looking at "chatter", they would call it, and for connections between people. This is where one of the things we talked about comes in, what's called predictive programming. That's a term that's used to mean something else as well, but in this sense it means predicting future behavior. This is what intelligence agencies originally wanted it for: they want to scan everything that everyone's doing, because we can't be trusted, right? There's always a double-edged sword here. There's a legitimate use for that, to find terrorists, but are they really going to limit themselves to that? And who defines what a terrorist is? It starts out as bombers, and before you know it, it's people like you who are questioning the narrative, right?

 

Will Dove  17:31

And I think another illustration we can give is that it wouldn't have been difficult for an AI, for example, to freeze the bank accounts of everybody who donated to the Freedom Convoy. Easy. But what you're talking about here is predictive programming, where it might freeze the bank accounts of people it thinks would donate to something like that, before they can do it.

 

Mark Davidson  17:52

So they can look through all your online activity. People have heard of profiling: we can profile people from patterns and predict who is going to be a troublemaker, who is going to question the narrative, and then take action preemptively. Yeah, I mean, this is the trouble with all technology. You go back to Oppenheimer, the famous quote, "Now I am become Death, the destroyer of worlds." Nuclear power could solve the world's power problems, but it can also destroy the world. This is true of virtually every technology, right? You can use it for good or bad, and it comes down to the intent of the person who owns it. So then you have to get into intent, you know.
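Mechanically, the profiling Mark describes is pattern scoring over collected activity. This toy sketch (every name, event, and keyword is invented) shows how crude the core operation can be: count matches against a watch-list and rank people by score.

```python
# Invented activity logs for two hypothetical people.
ACTIVITY = {
    "alice": ["donated to convoy", "searched garden tools", "shared rally post"],
    "bob": ["searched recipes", "streamed a movie"],
}

# An invented watch-list of flagged patterns.
FLAGGED = ("convoy", "rally", "protest")

def score(events):
    # One point for every event containing any flagged term.
    return sum(any(term in event for term in FLAGGED) for event in events)

# Rank everyone by how "interesting" their pattern of activity looks.
ranked = sorted(ACTIVITY, key=lambda person: score(ACTIVITY[person]), reverse=True)
print(ranked)                    # ['alice', 'bob']
print(score(ACTIVITY["alice"]))  # 2
```

Real systems layer statistical models on top, but the point stands: the judgment comes from whoever chose the patterns, not from the machine.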

 

Will Dove  18:39

Right. So let's take a for-instance scenario, a potential scenario. We know, I believe, that Biden has said that by a certain year, they have to put kill switches into all cars across the country, okay? Now, their purported purpose for this is law enforcement: if a car has been stolen, they can send a code that will kill the car's engine. Well, we know what it's really for. So here's a for instance. Let's say we haven't yet reached the point where they prevent us from going anywhere. But predictive technology within AI can monitor what you're doing at home, how you're preparing, what time of day it is, all based on past patterns, and it can tell the difference between you getting in your car to go to the grocery store and you getting into your car to go to a freedom rally. And if you're getting in your car to go to a freedom rally, it's not going to start.

 

Mark Davidson  19:32

Right. And I'm afraid you're right, unless there are limits. You know, I referred to this a long time ago, saying that we need a global constitution for the individual, where we put down in writing what our rights are going to be in this world we're moving into. Because the thing is, as someone who has spent their life in technology and science, and I'm not trying to offend people, I'm fully aware that most people really don't understand how far science has gone and how quickly it's moving. People think the internet transformed things; well, just wait. The transformations that are coming in the next five to ten years are even bigger, and I don't feel the average person can keep up.

So they're not aware that we would need protections written down somehow. And your politicians don't have backgrounds in technology; none of the people who could protect us are probably aware of what's going on inside the technology companies. I call this the technocracy. This alliance used to be the military-industrial complex; now it's kind of a military-intelligence technocracy, working in these labs and creating their vision of a world. And we the people, and our protectors, these politicians, have no awareness of where this is going. It's pretty terrifying, to be honest with you, if it's unleashed without any protections. We've all seen these dystopian movies, but you'll be living in that. We already, honestly, are living in that; it's just that most people don't realize it.

 

Will Dove  21:16

And do you have some thoughts on what those protections might be? Where we're going at this point in the interview is: what do we do about this? How do we stop this before it gets to the point where I get into my car to go somewhere the algorithm doesn't approve of, and my car won't start?

 

Mark Davidson  21:32

And I was thinking about this before I talked to you today, and I had a horrible thought, because it's actually worse than this. These AIs are quickly going to be able to fabricate. Let's say you come up on the radar of this thing, and it needs an excuse to arrest you or remove you out of the way. It can fabricate evidence so seamlessly that nobody could tell the difference between reality and the construct, like photographic evidence –

 

Will Dove  22:09

Videos, anything. With some of the high-tech software that's out there, which the average person can't access but the military certainly has, and governments have, they could put my face onto anything, and it would be completely believable, photorealistic. They can put words in my mouth. And if you doubt that, there's a software system we use sometimes here called Descript. One of the things Descript does is take your video and create a transcript. It does an okay job of that; it gets a few things wrong. But here's something you can do with Descript that's a little terrifying. Let's say I said something, or my guest said something, and then they came back in later and said, "I didn't mean to say this, I didn't want to use this word, I wanted to use that word." We can go in and change the word in the transcript, and it will then change the video to match.

 

Mark Davidson  22:59

Yeah. So a concept in all of this, and I don't want to go too deep, is what we call granularity. Right now, you would think an expert can look at a photograph, and if they look closely enough, the granularity being how small you're looking, they can see small imperfections or changes that imply it was, people would say, photoshopped; probably not Photoshop, but more sophisticated software. But you wouldn't be able to use that technique with an AI, because it will be correct down to the smallest property of a pixel. So there would be no difference; there wouldn't be any trace that this was fabricated. I mean, this is a truly horrifying, terrifying idea.

 

Will Dove  23:48

And of course, the AI can go way beyond that. It wouldn't just, you know, manufacture, say, a video of me committing a crime. It could then put posts online, under an account in my name, saying inflammatory things or threatening somebody. It could create false records of my cell phone tracking, of where I've been and for how long. It can basically perfectly frame someone.

 

Mark Davidson  24:15

And it may already be capable of this; we just don't know, it's not clear. But if it's not already, it will very soon be. It's just computing power, right? And again, it's intent and motivation. And unfortunately, in this world, there are people with that intent: control. So what can we do about it? I think it starts with awareness. It starts with us having this conversation and trying to make enough people aware that you can't just be asleep at the wheel through this AI transformation; this is something that's going to affect everybody. But in terms of safeguards, I think we have to have laws, some kind of safeguard against this. I haven't thought through what that would be; that's another discussion. But if they're not written down, they won't be implemented. If we don't have actual laws, we'll have no protection. It's just someone saying, "Well, yeah, that's a good idea," and they never pass a law —

 

Will Dove  25:23

— And my hope, as well as yours, is that they're going to come eventually. But in the meantime, those of us who understand this have to take steps to protect ourselves, because we're primary targets. We're the people who are fighting the narrative; we're the people they want to shut up. And so they will come after people like us. And I think there are some practical things we can do. The first thing I would say to people is: if you're going out and you don't need your phone, don't take it along. You probably know where you're going. There's no reason to take a tracking device with you that can record your voice and your movements. Leave it at home. And if you're at home and you're not using it, put it in a Faraday bag.

 

Mark Davidson  26:04

Yeah. I mean, again, this stuff can begin to sound like tinfoil-hat conspiracy, but it is true that, for example, the microphone on your computer is always on; even if you have it muted, there's still a certain level of current running through it, yes, you know.

 

Will Dove  26:24

And really, the only practical way to protect yourself from that monitoring is to a) disconnect your computer from the internet, b) turn it off, and c) remove the battery?

 

Mark Davidson  26:34

Yeah, I mean, it gets difficult, because we live in this technological world, so it seems like a lot of work. But that is probably your only protection, unfortunately, along with being careful about when and where you say things, thinking about this issue. But I just think the speed at which these technologies are moving, and it's not just AI either, is too fast for the human mind to grapple with, and I think the military, the technology companies, and these other power elites we talk about are taking advantage of that. They know that most people are only just becoming aware. And the interesting thing is, they seem to be trying to frighten people a little bit. I don't really know what their agenda is there, why they're doing that, because I feel like they're ultimately going to try to make AI seem like everybody's best friend.

 

Will Dove  27:35

But, you know, just last week on my news show I reported on AIs in, I don't want to use the name of the state because I might get it wrong, but there's a state in the US where AIs are now denying certain elderly people health benefits, because they determined that those people only needed X amount of time on those benefits, and when the time ran out, they cut them off.

 

Mark Davidson  28:00

Let me just correct something you said there, because it's important. They say "the AI decided". I say the people who chose the data, trained the AI, and wrote the algorithm decided. So it's still people. Yeah, it's still people; they're just creating a level of separation so they don't have to morally and ethically take responsibility for their own decisions.

 

Will Dove  28:20

And thank you for making that point, Mark; it's extremely important to understand. And that's where my own pet theory comes in, that we might actually be better off if we could create an artificial sentience, because it might actually grow a conscience. An AI will not; it will mindlessly execute its programming with no consideration whatsoever for the damage being done to the people affected by it.

 

Mark Davidson  28:46

Yeah, I mean, that's a deep rabbit hole. It's really ultimately impossible to create an unbiased AI, because your data set would have to be completely unbiased, and how would you create a completely unbiased data set? No individual is unbiased. And in the same way, even if you did create a sentient AI, it would have its own point of view. But I agree with you, ultimately. Have you ever seen the TV series Travelers? It had that kind of idea: in the future, an AI was created. It was a very interesting series, an interesting idea. But there are issues with that too. I mean, ultimately, I do not believe that AI can ever have consciousness or sentience in terms of human consciousness. Essentially, you can emulate it, which is scary enough. We've already talked about the speed we're moving at. I would say that within 30 to 40 years, you're going to have emulation of feelings, emulation of everything, and most people are going to be fooled. It's like the movie Blade Runner, right? I feel that the power elites are going to deliberately fool people with that. They're going to encourage the idea that it's just the same as us: "Look, see, it feels, it's creative." But it won't actually feel; it's just emulating, okay? And that's a very important distinction. And if you think about it, my kids have just gone through their college years, and I don't think there was anything in the entire education curriculum to prepare people for this technology or to think about these issues. Nothing, right?

 

Will Dove  30:46

Now, before we end this interview, you said something about ten minutes ago that I want to pursue. It wasn't something we really planned to talk about, but I think we should follow up on it, because you said that there are technologies coming that are even more frightening.

 

Mark Davidson  31:02

Well, I think the most frightening one, and it’s not directly related to AI as such, is nanotechnology. People have heard it referenced in sci-fi. When I was doing my degree, I wrote a paper on nanotechnology, and I remember thinking, this is insane, we just can’t do this. For people who aren’t aware of where it is: imagine taking a computer or a robot and shrinking it down so small that it operates at what we’d call the quantum level. From your chemistry you know you have atoms, you have different elements; this robot would be able to manipulate matter at that level. So it can change the fabric of reality. You could tell this nanotechnology, “I want to transform this glass into a different object,” or, like the old alchemists, “I want to turn copper into gold.” It sounds fantastic, the things you could do; it’s the holy grail of science. But it can also destroy, like a computer virus. People are aware that you can get a computer virus: first your computer gets infected, then somebody writes a program to disinfect it. You wouldn’t get that chance with a nano virus. It could destroy the planet before you even got a chance to correct it.

 

Will Dove  32:34

And it could be targeted. It could be genetically targeted.

 

Mark Davidson  32:38

Exactly. So it’s the Oppenheimer question: at what point do human beings realize that we’re not responsible enough to create a technology like that? It’s absolutely patent that we are not ready for it, and we’re probably not ready for AI either, for the same reasons we’ve just talked about. And I say this as someone who loves science. I’ve come to realize that we really should have stopped pushing forward until we improved ourselves. This relates to what I talked about with you before, something known in science as the filter. The filter is the idea that an ascending civilization is very likely to destroy itself once it can create technologies like nanotechnology, AI, and the other one we could talk about, genetic manipulation: trying to alter our genes, and not just genes. We talked about epigenetics, the nucleotide sequences. Say we want to make ourselves smarter, so we start manipulating that. These technologies are, to me, absolutely insane: that they’re being worked on in these labs, and that we have no oversight over them.

 

Will Dove  33:59

Now, for anybody who thinks nanotechnology is science fiction, it’s not. It’s actually been around for at least twenty years. The US military developed smart dust, which is literally the size of a grain of sand. In a test done back in 2001, folks, they sprinkled this smart dust on a bunch of military vehicles, 176 of them, and successfully tracked their speed and direction from a computer. Now it gets worse. They’ve also got something called neural dust, which is even smaller, which you could pass to somebody else with a handshake. It’s so small it can cross the blood-brain barrier. And here’s the really scary thing about it: smart dust is controlled by RF frequencies, radio waves, for example a signal from a cell phone tower. But you can’t get that inside a human body, or if you can, you can’t do anything useful with it.

 

Mark Davidson  34:53

Not unless you put it in a mass vaccine program, Will.

 

Will Dove  34:55

True, true, but here’s the problem. RF frequencies don’t penetrate the human body, because they don’t pass well through water. But what does pass well through water is ultrasound. Medicine has been using it for years, and ultrasound is exactly what is used to control neural dust. So let’s say that they infect us all with neural dust, and then let’s get back to the 15-minute cities. There are dog collars you can buy that have a transmitter, and if your dog goes more than a certain distance from that transmitter, it gets a shock. So what happens when they take this neural dust and infect us all with it? They say, “Well, this is your area. You can’t go more than 15 minutes from here, and if you try to, you’ll start to experience pain.” And it will get worse and worse the farther you go from that point.

 

Mark Davidson  35:43

And I’ve had this discussion before. My big concern about the shots is that we’re into a trust issue here. There’s no way, with something like smart dust or any nanotech, that I could verify what is in that shot at that level. Even under an electron microscope, which most people don’t have access to, or in a sophisticated lab, it would be very difficult to know what’s in there. I’m absolutely certain they do have this tech; DARPA has put a lot of money into it. It’s the holy grail, it’s almost like playing god, for science to be able to create these technologies. They may moralize afterwards, like Oppenheimer, “What have I done?”, but that doesn’t stop them; they still go ahead. Oppenheimer said there was a chance that the first test of the Manhattan Project could ignite the Earth’s atmosphere and destroy the planet, but they still did it. Because the desire to push the boundary and get answers is greater than the fear of the consequences, and then we’re left to deal with the consequences. And this is where we’re at. We’re finally at that filter boundary, where this isn’t science fiction, it’s science fact. And the most alarming thing to me is that there is absolutely no oversight, because people have no awareness that this tech is at that point. We only know about the manipulation of viruses because maybe one got out into the wild. But what else do they have? What else have they done with these viruses? How do we control any of this, until it all blows up?

 

Will Dove  37:29

And that’s a very good question to ask, because as far as I can see, knowledge really is the only hope we have. The better we understand it, the better off we are.

 

Mark Davidson  37:38

Knowledge and awareness. I’ve told people my whole life: I get that you’re not interested in science, I get that this stuff is complicated, but if you don’t think about it, it puts all of us at risk.

 

Will Dove  37:55

And I think we can give people some comfort in that. You don’t have to go to university and get a degree in something; you don’t even have to understand at a deep level how it works. You just have to understand what it can do. You don’t need to know, for example, how a cell phone tracks you; you just have to know that it can. You don’t have to know how they can turn on your microphone even when you’re not using your phone; you just have to know that they can. And once you know these things, you’re equipped to make your own decisions about how far you want to go to protect yourself.

 

Mark Davidson  38:22

But do you notice how, all over the internet and the media, there is no education on this? There is, obviously to me, an agenda here to not raise these points and say, well, we should be learning this, right? I mean, it’s —

 

Will Dove  38:40

— yeah, and the flip side of that, though, is that I do think we need to grossly overhaul our educational system. We need to be teaching the sciences much more strongly than we do, so that we’ve got kids coming out of high school —

 

Mark Davidson  38:52

— who understand these ethical issues. Why is none of this taught? But I guarantee there will be pushback against it. I feel this is a big part of it: they don’t want this level of awareness in people. They’re quite happy for people to have their so-called smartphone, which is really a device for them to download information instantly to people, right?

 

Will Dove  39:20

Yes. And by the way, for anybody watching who’s wondering why I’m doing an interview waving my cell phone around: this particular phone is deactivated and the battery’s dead, so they’re not listening to me through this one. Now, that doesn’t mean they haven’t been monitoring my Zoom interviews, but I do take certain steps. When I go out, I don’t take my phone with me.

 

Mark Davidson  39:42

Yeah, and there’s a fine line. You see a lot of people online, and we don’t want to become so paranoid that it completely impacts our lives. But it is an education, knowledge, and awareness issue, right? We’re moving into this ridiculously technological world with most people blissfully unaware of how everything works, and I just don’t think that’s tenable. It opens us up to total mass control. And I get that people don’t want to think the worst of people in power, but just look at all of history. I just don’t think you can be so trusting as to give people that much power and assume they’ll use it for the right reasons. Unfortunately, they’ve shown that they won’t, right?

 

Will Dove  40:32

Right, and further to the point you just made, we were discussing before the interview that I recently read Canada’s COVID by Professor Barry Cooper. In the conversation I had with him, and that interview will be out shortly, in fact probably before this one, he made a really good point: the concept of the philosopher king is false. Because the sorts of people who want power, and the sorts of people who do what you and I are doing, who sit back and analyze what’s wrong with our society and how to fix it, are not the same types of people. You and I don’t want to run for office; we don’t want power. But the people who do, in most cases, do not have altruistic motives.

 

Mark Davidson  41:11

The realization I had 35 years ago, when I was studying very advanced subjects like quantum physics, was that there’s going to come a point when they finally have these technologies: gene manipulation, nanotech, AI. And the people working on them, we have to get into the mentality of science people here, not to be offensive, but many of them are, we could say, sociopathic. Bill Gates is a good example of what I’m talking about, if you want one. As I was saying to you before the interview, they are very nihilistic; they don’t really see things the same way that you and I would. I’m not particularly religious, but I find myself more and more aligning with religious people over this issue, because I don’t believe that we are just random goop and nothingness. But that’s where they’re coming from, and when you start from that premise and then create all these technologies, you don’t have the reservations that a moral, ethical person would have. That’s what frightens me more than anything else: meeting some of these people and seeing that they would like nothing more, quite honestly, than to get rid of human beings and put us all into this Metaverse and say, we’re just technology, we’re just information. That’s what they would like. I don’t think that’s a world most of us, if we really think about it, would want for our kids. So if we don’t stop it, that’s what they’re going to push for. You can already see it everywhere. This is the agenda: transhumanism, the Metaverse. “Technology is wonderful, it’s all integrated, biology is just a different type of technology.” But it’s not true, and we didn’t get into this, but consciousness, the observer effect, appears to be something special that only biological life has, and they don’t want to talk about that. They just want to perceive us as a technology. So that’s a debate our societies are going to have to have. We’re going to have to finally answer the really big questions: what are we, and why are we here? And science is not going to want to have that debate. They think they’ve already won it, that they’ve pushed spirituality and religion out of the way. I don’t agree.

 

Will Dove  43:46

And neither do I. We are working hard to bring people the truth, and you can help just by sharing this interview on your own social media accounts. The more people who know the truth, the less power to sow fear and dissent the globalists will have. Please take just one minute to share this interview. If you don’t have a social media channel or account to share it to, then please copy the URL above and send it to anyone you know who might be receptive to this message. Fear is the only real weapon the globalists have, but the antidote to fear is not courage, it’s truth, because people fear what they don’t understand. Please share this interview and help us to spread the truth.
