The Miranda warning reminds me of my youth, when I watched American series on TV. There was always a scene where the bad guy was apprehended by the police, and the Miranda warning read to the villain served as the epilogue of the story. The episode was over and you could return to your life, reassured that law and order had been restored. This trope was reinforced by the repetitive structure of the series back in the day. Most of them followed a very strict skeleton; I used to watch the clock and try to guess when the plot twists would occur. Series are much more creative and surprising today. Ironically, it is the movie industry that has become very formulaic, but that is another story (yet a nice allegory for this whole essay). What I am tackling here is the idea that when you publish information (mostly on social media today) there is no Miranda warning. That should change.
Anything that you publish will be used against you on social media
The undiscussed point about ChatGPT and its friends is where they get their learning material in the first place. Basically, they crawled the web and put their digital hands on whatever has ever been published. So if you have a blog, like this one, if you wrote texts for social media, if you published an article, etc., you may be part of the project: your words have been used to train those AIs. Now consider the fact that ChatGPT is the fastest-growing application in the history of humanity. In a few weeks it secured more than 100 million users; reaching a billion users in less than a year is not impossible. I am just going to let your imagination work on that: how much money can be made out of such growth is surely something never witnessed before. And it is just the start. But my angle is not about those commercial considerations. I am rather trying to imagine how it will change the virtual landscape. Let's examine some ideas.
Reality is what has no copy
Clément Rosset was a French philosopher (yes, I know, that can sound bleak, but he was not like the usual suspects). He worked on the issue of reality and the problem of the Oracle. In a nutshell: the Oracle makes a prediction from which the hero tries to escape, but whatever the hero does, he ends up fulfilling it. What is strange is that the hero thinks things could have been different. It turns out that the essence of reality is that there is no double of reality. Technology started to mess with this idea with phonographs and cameras: there was this notion that you could capture reality and put it inside a box. From this point of view, GPT boxes are no different: they are abstract pictures of what has been published. They may appear impressive or stupid (GPT can contradict itself within a sentence), but that is not their essence. They are, in fact, the zeitgeist carved in silicon. This observation can help us guess where things are heading.
Predictions
The question of how this is going to change our world can be seen in a different light than the doomsday one. Everyone talks about machine learning, but what is the machine learning from? It turns out that it is mostly from human wittiness. This means that if GPT no longer has access to fresh data, it will enter a kind of stationary state and only be able to rehash old material over and over. So it will feed on what is published on the internet. Now, what if most of what is published is in fact generated by AIs? It is not difficult to imagine that most news articles could be written by machines: from factual sports reports to unsurprising White House press conferences, there is not much that looks out of reach for the existing version of GPT. Nowadays everyone is playing with GPT to produce material: texts, images, sounds, videos, etc. The problem will be to find fresh and interesting inputs for the AI to grow on. Where will those terabytes of data come from?

Social media look like the primary source of this elixir of youth. The previous business model of the internet was based on recording your actions, aka surveillance capitalism. It relied on the analysis of metadata: who you communicate with, where you click, etc. But the market model is going to change. It will move to what you think: collecting every piece of text, every image and video you publish to feed the GPT machine. The data, not only the metadata, is going to be the gas making this new engine run. It is going to be such an enormous market that there is no way people will keep giving their thoughts away for free. How it will unfold precisely remains to be seen, but an internet more closed than today's looks like a possibility.
Another probable evolution is to start distinguishing between human and robot data consumption. As a human, there is only so much data you can ingest in a given day. Therefore, if there is a limit of 25 hours of video that can be downloaded per day, no human will be penalized (I know the young generation watches and listens at 2x speed, but you get the idea). Likewise, a human can only read so many tweets in a day, and so on. Of course, those limits are no limits for robots, which can download data as fast as the internet allows them to. So adding this kind of restriction may make sense in order to force the GPTs of the world to pay to download data. Whether or not the authors get paid back is another story yet to be discussed.
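To make the idea concrete, here is a minimal sketch of such a human-scale daily cap. Everything in it is an illustrative assumption, not an existing API: the 25-hours-of-video quota comes from the text above, while the `DailyQuota` class and its `request` method are hypothetical names.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical cap: 25 hours of video per day, expressed in seconds.
HUMAN_DAILY_VIDEO_SECONDS = 25 * 3600

@dataclass
class DailyQuota:
    """Tracks one client's daily download consumption against a human-scale cap."""
    limit: int = HUMAN_DAILY_VIDEO_SECONDS
    used: int = 0
    day: date = field(default_factory=date.today)

    def request(self, seconds: int) -> bool:
        """Allow the download if it fits within today's remaining quota."""
        today = date.today()
        if today != self.day:          # a new day: reset the counter
            self.day, self.used = today, 0
        if self.used + seconds > self.limit:
            return False               # beyond the human-scale cap: deny (or charge)
        self.used += seconds
        return True

quota = DailyQuota()
print(quota.request(2 * 3600))   # a two-hour film: within the cap
print(quota.request(90_000))     # a bot-scale bulk pull: over the cap
```

A human viewer never hits the limit, while a crawler pulling terabytes trips it immediately; the interesting policy question, as noted above, is what happens on denial: a hard block, or a paid tier for robots.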
I am not really afraid of a Kurzweil-like doomsday scenario in which AI suddenly behaves like a Golem 4.0. But that doesn't mean I am underestimating the depth of the impact of these new technologies on society. Now you can't say you weren't warned.
Humans could also use AI to get those 25 hours of video digested into 25 minutes: it could potentially filter out the noise for us (or the things it knows we already know and need not be reminded of).
Reading old novels becomes the new reading of poetry: only for those who are not trying to capture lightning in a bottle (the latest Land Rover model or a flashy new job title).
And it is the latter that worries me a little: there will be more polarization. People who opt out of the rat race versus those who think arbeit macht frei; I guess the IYI class will become even more fascist about their aspirations... you know, in light of what industrialization did to us.
At some point these two classes can no longer mix, because of the children: if your kid doesn't have the latest smartphone capable of running whatever app kids use to virtually hang out in, or an electric scooter, etc. (all of that being the domain of those who like to work and earn money and buy, buy, buy).
People might also, of course, see AI as the new religious entity to listen to, replacing doctors and other experts, without realizing how wrong and dangerous it could be (and not dangerous in a Terminator machine-uprising sense, but because of humans' own obliviousness to how dumb we can become, running towards a cliff...).
Anyway!