Comment by Agnieszka Porębska – CEO of Talent Alpha, a tech hub that connects nearly 900 IT services organizations from 52 countries on one platform, enabling them to find and share tech talent on demand.
The rapid development of artificial intelligence is one of the most important topics of our times. According to PwC's Global Artificial Intelligence Study, the global adoption of AI may add $15.7 trillion to global GDP by 2030, which would be a significant boost to the economy. Yet there is probably nothing else out there that could change our world so dramatically, or that carries so many risks. Consequently, many people inside and outside the tech sector are extremely concerned about AI adoption. I believe, however, that they are looking at the problems connected with artificial intelligence through the wrong lens. AI is not good or bad (yet). There are still people behind it, and I believe we still have the power to influence them, as well as the entire AI revolution. We should aim to drive the AI revolution, not merely consume its fruits.
People behind AI
AI is a tool in the hands of people and, as far as we know, does not have the ability to act independently. The discussion should therefore center on people, their decisions and their motivations. So, who is driving the change? Why does guiding the development of AI in a safe and controllable direction appear to be so difficult? Are we driven by a determination to create a better future for present and future generations, or are there other motives?
We should start by asking questions about the leaders of this change – corporations, individuals, and governments. According to the Bruegel think tank, over the last decade the top 10 firms employing people with AI skills posted more than 26% of all job vacancies requiring those skills. However, this concentration of commercial power is now being challenged. For example, at the beginning of March, Meta's LLaMA (Large Language Model Meta AI) was leaked to the public, and many open-source solutions that compete with products from the major players now stem from the leaked information. Still, history shows that big companies grow ever more powerful and influential, even in sectors where competition is fierce. One of the reasons for this is limited access to data, which I will discuss later in the article.
So, we need a debate about who is, and who should be, in charge of the development of AI. Rather than focusing on "Will AI take my job?", let's analyze other questions connected to this topic: "Who will control AI and profit from taking my job?" and "Is this inevitable?" And if it is, how can we make sure that the profits from this process do not end up in the hands of a select group of people or companies, and what compensation could be offered to those who will be affected? Furthermore, what other opportunities could open up for people who lose their jobs in this dramatic shift, and who would ensure that these openings are fairly distributed?
Means of control
We should use all the tools available to control the process of change and make conscious choices. This will be challenging: people have already handed over a significant number of decisions, and a great deal of knowledge, to commercial interests, and there is a perceived lack of time to act on AI. People have also exchanged their rights and influence for seemingly free-of-charge solutions and the convenience these provide. But we still have some say in this process. Where is AI on the agendas of political parties? How do politicians and public institutions plan to control, or at least influence, the businesses driving this change? How can we compete against global superpowers that play by a different set of rules? Perhaps we should focus more on these issues and less on short-term topics such as the 800+ benefit program.
As The New York Times has noted, to date no bill has been passed in the US either to control the potential dangers that AI represents or to protect the country's citizens. In addition, the EU faces long delays in introducing its AI Act. In fact, it is China that is leading the way on AI regulation, with Chinese officials already working on a second round of AI rules focused on deepfakes. This shows China's ability to set the rules in this sphere, in stark contrast to the sluggish reaction to AI regulation in the West. However, I do not claim we should hurry to implement a raft of laws. Rather, I think we need more public debate and interest in this topic to push legal adaptation forward, but in a very cautious way. History shows that regulations can sometimes exacerbate rather than solve problems. The best example is GDPR, whose implementation saw large corporations increase their advantage over smaller players. This should be a sobering lesson for us all. In the same vein, we should be aware of the people behind AI, and of the people who will shape AI-related legislation, and take into account all possible scenarios of implementation.
In addition, cooperation between a number of public institutions will be needed. In its latest report, the AI Now Institute highlighted a set of approaches that could reduce the influence of vested interests and big tech companies in the AI sector. The report concluded that the expected impact can be achieved only if the right policy is combined with coordinated action by public organizations.
Last but not least, we should consciously work on legal and economic solutions for the new order that may emerge from the AI revolution. Labor law and social security are only two of the topics that should be on political and social agendas. A close eye should be kept on this process, as different scenarios may well be tested in the court of public opinion.
Democratizing data
There is one more very powerful way of making the market more equal and inclusive, and of allocating more power to small businesses and local communities: disseminating and sharing data more widely. Talent Alpha cooperates with many small IT services firms, which have said that in the field of AI and machine learning they could implement many more innovations and make a real difference in sectors such as biotech, healthcare, energy, and transportation, provided that the data required to train algorithms were more accessible. A great deal of the data collected by public institutions and major corporations is not sensitive and could easily be shared, and there are also large libraries of anonymized data to which only a few companies have access. Sharing all of this data would boost innovation, open up opportunities to smaller players, and reduce the concentration of power in the AI sector.
In fact, LLMs are trained on data sets built from publicly available content. The Washington Post, together with the Allen Institute for AI, analyzed the data sets used to train Google's T5 and Facebook's LLaMA. The investigation revealed that the data was drawn from sites such as Wikipedia, Coursera, patent databases, news outlets, crowdsourcing sites, and many other sources. A large proportion of this data is generated by our online activity, our use of search engines, the data we put into the cloud, and so on. We could say that we have all, in some way, contributed to the development of AI models. The same applies to AI models trained on the personal data we share with, for example, insurance companies, financial institutions, and travel agencies. So why are big organizations allowed to use our data without being obligated to share what we have, to a certain extent, helped to develop? If data is gold – a commodity that is mined, processed and sold – why do the big players not have to pay a "data tax" by sharing part of this data, or the resulting model weights, with others?
In May, Sam Altman, CEO of OpenAI, and Wojciech Zaremba, OpenAI's cofounder, visited Poland and spoke with Prime Minister Mateusz Morawiecki about possible collaboration and the sharing of public data with OpenAI. This could represent a great opportunity for Poland, especially if local companies were invited to cooperate. At the same time, we should ensure that not only big companies like OpenAI, but also smaller Polish companies, have access to national data, and to the model weights created with it, when working on AI projects. I believe this is in line with OpenAI's intention to democratize AI, and I hope other big players will adopt the same attitude.
A new literacy
Finally, we should use all available means to make AI a tool in the hands of people. Just as computers and the Internet made us much more productive, we could use algorithms to enhance our capabilities. We should put as much effort as possible into helping society attain a new AI literacy: the ability to ask the right questions and then use the answers efficiently. This should happen across schools, universities, reskilling and upskilling programs, and grant initiatives. We now have a rare window of opportunity to use these emerging "superpowers" to drive massive, democratized innovation.
However, if we do not learn how to build and develop AI, and remain merely consumers of someone else's easy solutions, our capabilities could diminish rather than expand. In the late 1970s, James Flynn discovered that measured IQ scores in US society had risen significantly, mainly due to education. Today we face a reversal of this trend, with a decline in IQ most visible in the youngest generation. Imagine how these statistics will look after 10 years of using ChatGPT without the ability to proactively use AI to create new ideas and new kinds of innovation. Our goal, then, should be to make sure that people are capable of using this new tool for multi-faceted personal growth, and to prevent them from becoming mere consumers and data providers.
People implementing AI solutions will dramatically change our world. Do we want this, and if we do, are we prepared? In any case, the time has come for society as a whole to take steps to preempt the most detrimental effects that AI may bring.