Is Linux the only platform left to escape AI?

Artificial Intelligence, or AI, has been occupying our minds for decades, or even for hundreds or thousands of years if we look as far back as Greek mythology and medieval legends. If we focus on AI primarily from the perspective of thinking machines, then its origins probably lie in the 1940s. But if we look at the real momentum that AI has gained and the visible impact it is suddenly starting to have around us, we don’t have to go back more than a few years. Right now, AI seems to be the talk of the town. More and more AI-based solutions follow one another, with ever greater promises and apparent benefits for us as humans. Web browsers are getting built-in AI functionality, search engines are being built on an AI foundation, software applications of all kinds are getting AI support, and even operating systems are getting built-in AI technology. And all of this should help us to…well, to do what, actually? There is a growing feeling that we will soon no longer be able to ignore AI. But what if you do not yet see only positives in AI? What if you are wary of what AI will mean for the world, are thinking about its environmental impact, or simply do not yet want AI integrated into your computer use? Is there still an option to use your computer the old-fashioned way? In this article, we will look at Linux as a platform to escape the AI race for a while.


  1. Introduction
  2. AI requires care and caution
  3. The impact of AI on available platforms
  4. Final words


There seems to be no article, blog post, or YouTube video published lately that does not mention something about AI. Many use cases are described in which AI can greatly improve and enrich our lives. Productivity experts and enthusiasts present us with the most fantastic possibilities that AI already has to offer, even though it is still in its infancy. We can generate photo-realistic images from a handful of prompts. We can have entire articles written for us by offering only a few thoughts to an AI engine. Recently, we have even seen very realistic short films generated from a few creative statements. From a productivity perspective, we are tempted by the possibility of having a received email summarized for us, and, on the other side of the exchange, of having an email generated for us from a few cleverly described commands.

But what are we actually doing? Has the recipient of an email not earned the respect of reading a text with ideas that the sender actually composed by hand? And does the writer of an email not deserve the respect of having his or her content read with attention, so that the important nuances are not missed? Do we really think it is okay that we no longer give each other sincere attention for what we have to say and what we have produced for each other?

There are a lot of questions that concern me about AI. I am a person who finds it very important to look at and interpret everything with an objective view, as far as that is possible for a human being. I am therefore not someone who has formed a conclusive negative or positive opinion about the use and possibilities of AI. I do see advantages for us as humanity, but I also definitely see dangers and problems on an ethical level at the moment.
AI is already being offered to us as if it were a fully-fledged end product, yet there are many conceivable situations and outcomes of AI that are currently downright scary, discriminatory, and sometimes even life-threatening. So what should you do as a computer and software user who is still a bit skeptical about AI and wants to calmly wait for developments, without being forced into contact with it or using it unintentionally, for example in your operating system? Linux is a very nice platform for that: it still gives you the possibility to really be in control of what you want to do with your computer, without being forced into a particular way of working. With Linux, we are still talking about real personal computing, and everything you do there is still really personal.

AI requires care and caution

I already mentioned some concerns about AI in the previous paragraph, but let’s go into them a little deeper. When it comes to AI, I see issues that concern me and that may also concern you as a technology enthusiast. Below, I want to raise some issues that seem to be overlooked by others, perhaps pushed aside, or simply not considered important enough.

Incorrect results

AI can only exist if there is input data available for the required learning process. However, some of the data used by machine learning systems can simply be incorrect. We all know that the internet can be a really great place, but also that there is a lot of incorrect and incomplete information out there. These learning models are not (yet) able to tell the difference between correct and incorrect, or between complete and incomplete. They simply use the data and try to come up with the most logical answer to the question asked. The answer you get from an AI system can therefore be completely wrong. When you use AI to, for example, get inspiration for an article, you should always do your own fact-checking to make sure that what you use is complete and correct.

Life-impacting and life-threatening outcomes

We can take the above a step further by looking at the dangers of incorrect information. I was recently genuinely shocked by some of the incorrect responses that AI engines can come up with. Questions about your health, or about ingredients, can lead to life-threatening answers if you simply accept those answers as the truth and act on them. What if AI learning models also treat satire-based information as the truth? Not all information on the internet is factually correct, whether intentionally or not, so AI models will inevitably be fed incorrect and harmful data. How do we determine together what is safe data for feeding answers to medical questions, legal questions, and so on?

Non-existent results

In the previous point, we discussed the danger of incomplete or incorrect information. But in the past few years of AI’s rise, examples have emerged where AI software comes up with results that simply cannot exist. The learning system underlying AI uses the data available to it. If that data, for example about sports statistics, is only available up to a certain point in time, and you ask a question about a later moment, then the AI should simply indicate that it does not have the answer. But apparently AI tries its utmost to come up with answers anyway, producing scores that never existed, sports teams with players who never played for them, and so on. AI is thus able to come up with non-information because it refuses to admit that it does not know the answer.

Missed essence

I mentioned it above, but more and more productivity solutions are becoming available, such as in office applications, that can generate texts or emails on the one hand and summarize texts and emails on the other. If we take this to the extreme, the consequence is that people in an organization send AI-generated, and therefore not self-written, content to colleagues, and those colleagues then have this generated content summarized by AI again. For me, the major problem here is the loss of the real essence of the message that should have been communicated between the two parties.

Bias and ethics

Most of us are probably already aware of the existence of and the potential problems surrounding confirmation bias. We all deal with confirmation bias, consciously or unconsciously. For example, I love technology and everything that has to do with it, so I move around in groups of people, in internet forums, on websites and blogs, and on YouTube, where I find the most connection with my personal interests and preferences. For me it is technology; for another, it is politics. The information bubble in which we often voluntarily immerse ourselves has the negative effect that our ideas keep being confirmed as long as we continue to wander around in these echo chambers. To have a more objective view of the world, of life, or of whatever topic, you also need to be exposed to insights and opinions outside the frameworks that you are familiar with or that match your preferences. Much the same applies to AI. The engines behind AI are fed with data. If this data is too one-sided, then the answers will risk being one-sided as well.

“machine learning software could be trained on a dataset that underrepresents a particular gender or ethnic group”
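To make the underrepresentation problem concrete, here is a deliberately tiny toy sketch in Python. It is not a real machine learning system, and the groups, outcomes, and numbers are all hypothetical; it only "learns" the most common historical outcome per group, which is enough to show how thin the evidence for an underrepresented group can be:

```python
from collections import Counter

# Hypothetical training data: (applicant_group, historical_outcome) pairs.
# Group "A" is heavily overrepresented; group "B" barely appears at all.
training_data = (
    [("A", "approved")] * 90
    + [("A", "rejected")] * 10
    + [("B", "approved")] * 1
    + [("B", "rejected")] * 4
)

def train(data):
    """'Learn' the most common outcome per group (a stand-in for a real model)."""
    outcomes = {}
    for group, outcome in data:
        outcomes.setdefault(group, Counter())[outcome] += 1
    return {group: counts.most_common(1)[0][0] for group, counts in outcomes.items()}

model = train(training_data)
print(model)  # {'A': 'approved', 'B': 'rejected'}
print(sum(1 for group, _ in training_data if group == "B"))  # only 5 examples for "B"
```

The rule learned for group "A" rests on 100 examples; the rule for group "B" rests on 5, so a handful of unusual historical outcomes is enough to fix the model’s conclusion for that entire group.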


More and more use cases for AI are being designed or have already been developed. In theory, if AI and its underlying models were built and operated optimally, objectively, and without bias, there would be very nice and practical possibilities for which AI could be used. Would it, for example, be possible to establish a fairer judicial system with a more objective assessment of all available factors? According to an article by UNESCO, there are many challenges to be addressed and investigated before you can seriously implement these types of applications.

“So, would you want to be judged by a robot in a court of law? Would you, even if we are not sure how it reaches its conclusions?”


The problem with the current level of AI available to us is that its learning is still in its infancy, and for some topics the answers are therefore also still at a child’s level, with all the biases and incompleteness of knowledge that come with being a child. Using results at a child’s level for adult decisions demands care and caution.

Usage of creative work without approval

A big problem with AI is the ignoring of copyright, or the assumption that you are simply allowed to feed your machine learning engines with the work of creators. In my own situation, I put a lot of time and effort into creating hopefully reasonably ok articles that can be of added value to some of you, and you as a reader come to my website to read them. Now, with AI, my articles are seen as free learning material for machine learning engines and are used as source material without my being asked. It goes even further: if it were up to Google, search queries would no longer yield pages with possible answers, including possibly my website and those of other content creators, but a pre-formed answer composed from various sources. The danger is that I create content, but that soon no one will ever come to my website to read the actual article.

Counterarguments are already being made that a person who is inspired by articles from other bloggers can also write without crediting those authors, as long as nothing is copied exactly and passed off as their own. And of course, we all get inspiration from available material, but the difference is that there is normally a real person behind it who creates something original. AI, however, composes an answer or result from a large number of already existing components, phrases, and so on. If we reason this further, we can say that a human can create something even without available inspiration, but AI can do nothing at all without available data.

“We need to develop new frameworks to differentiate piracy and plagiarism from originality and creativity, and to recognize the value of human creative work in our interactions with AI.” 



Usage of personal data without approval

In the previous topic, I discussed the usage of creative work without approval, but what about the usage of your personal data? To what extent do we find it okay to release all our personal information, such as our purchasing preferences, spending behavior, going-out behavior, and internet behavior, to learning models for AI?

“AI systems pose many of the same privacy risks we’ve been facing during the past decades of internet commercialization and mostly unrestrained data collection. The difference is the scale: AI systems are so data-hungry and intransparent that we have even less control over what information about us is collected, what it is used for, and how we might correct or remove such personal information.”


To what extent are we still able to indicate that we do not want our personal data to be used to feed AI? To what extent is this data interpreted correctly? And what is the scope of this data? Through my relationships with friends and family, a whole network of people can suddenly become part of a learning system that originally only concerned me, but soon affects everyone around me, and of course vice versa.

Lack of respect

Above, I already mentioned missing the essence when using AI in your communication, for example for summarizing received messages. More and more productivity solutions are able to generate texts or emails on the one hand and to summarize texts and emails on the other. The consequence is that we no longer really write ourselves, and that what we write is no longer really read. For me, there is a layer of disrespect for your fellow human being in that way of working and interacting.

Focus on the result instead of the process

I read more and more articles and watch more and more YouTube videos of people using AI to make their productive and creative lives more efficient, more effective, faster, more optimized, and more profitable from a financial point of view. But everything seems to be focused on the result. For me, it is not just about the result; it is also about the process of getting there. Personally, the process is even more important than the end result. I truly enjoy the process of preparing, researching, writing, editing, rewriting, and publishing articles. I enjoy the process of daydreaming and working out ideas for future projects. Why would we want to outsource that to AI?

Impact on the environment

Our modern life seems to have a huge impact on the health of our natural environment. We throw a huge amount of electronic waste into landfills, often perfectly usable devices that many of us think are no longer modern enough and are therefore candidates for replacement. Then came the cryptocurrencies, with their energy-guzzling mining activities and the ever-growing e-waste of quickly wearing-out rigs full of graphics cards. And now we are entering the age of AI, where the technologies required to make it possible seem to demand even more energy.

“since 2012, the amount of computing power required to train cutting-edge AI models has doubled every 3.4 months” 


The impact of AI on available platforms

I have expressed some concerns above about careless implementation on the one hand and thoughtless use of AI on the other. That does not mean that I am against AI; I definitely see possibilities that can greatly improve our lives. I do have a problem with how AI is currently offered, as if it were already a fully thought-out and tested end product. In my opinion, AI is a technology with potential, but at the moment it is far from finished, far from high-quality, far from reliable, and without clear rules and boundaries regarding implementation, supply, and use. I am absolutely convinced that this situation will improve in the coming years, but for now we as normal users of technology are simply being used as beta testers, while AI is sold as a proven end product, and many people think it already is one, with correct results.

However, what we also see happening more and more is that AI is not only being integrated into certain applications, such as productivity tools, creative tools, and project management tools. Integrating AI into such applications is fine in itself, since you as an end user can decide for yourself whether you want to use that application or prefer a comparable alternative without AI. But AI is now also increasingly becoming part of almost all mainstream web browsers, and operating systems themselves are not immune to its influence either. Microsoft is integrating Copilot into Windows, where it is being positioned as your AI-powered companion. New laptops are even coming out with dedicated keyboard buttons for AI, under the name Copilot+ PCs. In addition, Microsoft is now also introducing Recall. Microsoft Recall frequently creates screenshots of your complete screen and offers a semantic search across all historically created screenshots. Even though Microsoft says that this data is only stored on the PC itself and not in its cloud, there is still a security risk if, for example, your laptop is stolen. The first thing a smart thief will do is search the Recall history for passwords and other sensitive information that was once on your screen. Apple is also working on AI integration in macOS, iOS, and iPadOS. Google is working on a complete redefinition of search results. Apparently, you can no longer avoid the presence of AI.

But what if you are not yet ready for AI in the applications and platforms you need? Where can you go then? Is there still a way to opt out of AI as long as it does not meet your own criteria? Well, what about Linux? Of course, there are already AI developments going on in the open-source world, but if we look at Linux as a foundation for various distributions, we see that there are plenty of AI-free options available for end users who do not want anything to do with AI yet and want an operating system in which you, and not an AI-driven copilot, are really in control. Linux is one of the few options where personal computing is still really personal computing. So, have a look at Linux Mint, Zorin OS, Ubuntu, elementary OS, Kubuntu, or any of the other beautiful Linux-based operating systems, and see whether switching from macOS or Windows is something for you for your daily personal AI-free computing needs.

If you would like a nice and simple, yet extensive introduction to Linux for your productive needs, have a look at my free “Linux Mint tutorial series”, or have a look at my 360-page “Linux for the rest of us” book in paperback or Kindle format. If you need some additional reasons to help you decide to switch to Linux, you can read my article “16 Reasons why you should switch to Linux”.

Final words

In this article, I have raised some concerns about the current state of AI that I genuinely care about. Normally I do not give my opinion or express my concerns as much as I have here. But when it comes to AI, I see issues that worry me and that may also concern you as a technology enthusiast. So, I have tried to raise issues that seem to be overlooked, perhaps pushed aside, or simply not considered important enough.

I hope you consider trying out or looking into Linux and Linux-based computing as an alternative to the abundance of AI that is already there or that is to be expected in other systems soon. If you like your computing to be and hopefully stay personal, Linux might be something for you.



About John Been

Hi there! My name is John Been. At the moment I work as a senior solution engineer for a large financial institution, but in my free time, I am the owner of this website and author of my first book "Linux for the rest of us". I have broad insight and user experience in everything related to information technology, and I believe I can communicate about it with some fun, knowledge, and skill.
