The rapid transformation of the world under the influence of artificial intelligence (AI) is driven by its ability to accelerate innovation, reshape economic processes, change how people interact with technology, and open new horizons for humanity’s development. Recent years have been marked by an unprecedented pace of AI adoption in key areas of life, significantly transforming the labor market, scientific research, social communications, global security, and the economy. However, these changes demand a deliberate approach to regulation, the establishment of ethical boundaries, and society’s adaptation to new technological realities. It is crucial to ensure that AI remains a tool of progress that fosters sustainable development rather than becoming a threat to stability and human well-being.
This is the subject of an article by Academician of the National Academy of Sciences of Ukraine, Mykhailo Zgurovsky.
Progress in AI has been made possible by the combination of several key factors. High-performance graphics processing units (GPUs), tensor processing units (TPUs), neural processing units (NPUs), and central processing units (CPUs), together with cloud technologies, have enabled the processing of vast amounts of data and the training of deep neural networks. The internet and digital platforms have created powerful data sources, significantly enhancing the efficiency of machine learning algorithms. At the same time, breakthroughs in deep learning have allowed AI to integrate into key industries such as healthcare, logistics, energy, education, and defense.
The synergy between AI and fundamental sciences plays an important role. Physical and chemical laws serve as the foundation for creating new models, while AI, in turn, accelerates the analysis of complex systems. A vivid example of this is the 2024 Nobel Prizes in Physics and Chemistry: the works of John Hopfield and Geoffrey Hinton laid the groundwork for modern artificial neural networks, while the research of David Baker, Demis Hassabis, and John Jumper enabled AI to predict the three-dimensional structure of proteins.
AI also plays a critical role in the field of security, providing strategic advantages to countries leading its development. Automation of military systems, cybersecurity, and threat analysis are shaping the new rules of global competition. Lagging in this area increases a nation’s vulnerability to modern threats, especially in cyberspace.
Thus, AI is not only a technological breakthrough but also a key factor shaping the global balance of power. Its further development will determine the future of economies, international relations, and security, and countries that can effectively adapt these technologies will gain significant advantages in the modern world.
1. FEATURES OF AI DEVELOPMENT IN LEADING COUNTRIES OF THE WORLD
Leading countries were the first to recognize the strategic importance of artificial intelligence (AI) and invested in its development, leveraging this technology for economic growth, security, and competitiveness. However, the level of investment, research, integration of AI into the economy, and regulatory frameworks vary significantly by region.
The United States maintains leadership in AI, having invested over $300 billion in the past decade. Agencies such as DARPA (the Defense Advanced Research Projects Agency) and the NSF (National Science Foundation), along with tech giants such as Google, Microsoft, and OpenAI, are expanding AI applications in defense, healthcare, education, and space exploration. With a high concentration of talent and private capital, the U.S. remains at the forefront of the AI revolution.
China is rapidly closing the gap, investing $200–250 billion and prioritizing government initiatives such as the “AI 2030” strategy and companies like Alibaba, Tencent, and Baidu. China leads in integrating AI into smart cities, surveillance systems, and defense technologies, utilizing vast amounts of data to enhance algorithms.
The European Union has invested $100–120 billion, focusing on AI ethics, sustainable development, cybersecurity, and quantum computing through the Horizon Europe innovation support program and projects by companies such as SAP and Siemens. The EU is a global leader in data protection, with the AI Act, the world’s first comprehensive legal framework for regulating the development, use, and deployment of AI, particularly in critical areas. The Act took effect on August 1, 2024, with most rules applying from August 2, 2026, positioning the EU as a global standard-setter.
Japan, with its “Society 5.0” strategy, has allocated $50–60 billion to robotics, autonomous systems, cognitive algorithms, and quantum computing. Tech giants such as SoftBank, Toyota, and Fanuc develop intelligent robotic systems for industry and social sectors, while Riken AIP and the University of Tokyo focus on advanced algorithms.
Israel ($30–40 billion) emphasizes cybersecurity and defense systems. Companies such as Mobileye and Waze integrate AI into autonomous transport and security applications, maintaining the country’s leadership in high-tech domains.
The United Kingdom ($25–35 billion) supports research through DeepMind and leading universities like the University of Oxford and University College London (UCL), focusing on healthcare, data protection, and defense.
Canada ($20–30 billion) develops ethical AI standards through the Pan-Canadian AI Strategy. The Canadian government tasked CIFAR (Canadian Institute for Advanced Research) with implementing the strategy, including establishing AI research institutes in Montreal, Toronto, and Edmonton, training qualified specialists, and ensuring ethical and responsible use of AI technologies.
South Korea ($15–25 billion) invests in cybersecurity, robotics, autonomous systems, industrial automation, and mobile applications. The Korea Advanced Institute of Science and Technology (KAIST) drives AI and machine learning research, while companies like Samsung and LG actively implement cutting-edge technologies, creating competitive global products.
India ($10–15 billion) focuses on agriculture, healthcare, infrastructure management, big data analysis, process automation, and national security under the Digital India program. The Indian Institutes of Technology (IITs) network, comprising 23 institutions, conducts research in mathematical modeling and applied AI aspects.
Australia ($5–10 billion) applies AI for environmental monitoring, agriculture, and educational technologies. Government programs such as the AI Action Plan and Emerging Technologies Fund support research and technology integration into key sectors.
Thus, the development of AI in leading countries reflects diverse strategic approaches: the U.S. and China invest in global leadership, the EU emphasizes ethics and safety, while Japan, Israel, and other nations adapt AI to their national priorities.
2. THE STRATIFICATION OF THE WORLD DUE TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE
The development of artificial intelligence (AI) is shaping a new global landscape of inequality that affects all key aspects of societal development: the economy, security, science, technology, and education. This process is driven by uneven access to advanced technologies, knowledge, and resources, and its consequences are becoming increasingly evident. While leading economies invest colossal amounts in AI and gain strategic advantages, many countries fall behind, creating long-term imbalances and reinforcing the uneven progress of technological development.
Economic stratification caused by AI development is one of the most pronounced aspects of this process. In 2024, global investments in AI reached a record $500 billion, with over 70% concentrated in G7 countries and China. In developed nations, AI’s contribution to GDP growth is estimated at 10–15%, whereas in developing countries, it rarely exceeds 2–3%. The level of automation in production and business processes also highlights this deep imbalance: in technologically advanced economies, over 50% of routine tasks are automated, while in less developed nations, this figure does not exceed 20%. By 2035, AI’s total contribution to global GDP is expected to exceed $15 trillion, but the majority of this growth will be concentrated in technological hubs, while most countries will only partially benefit from these advances. Thus, instead of becoming a factor for leveling economic development, AI adoption only deepens the existing gap between rich and poor countries.
The disparity in the security sector is even more pronounced. Developed countries annually invest over $200 billion in cybersecurity, enabling them to use AI for attack prediction, autonomous defense systems development, and comprehensive cyberspace monitoring. Meanwhile, average spending by developing countries does not exceed $5 billion, limiting their capabilities to basic data protection methods. As a result, the number of successful cyber incidents in less developed nations is 50–70% higher than in technological leaders. Access to advanced defense technologies determines not only data security but also a state’s geopolitical resilience. Given the current pace of development, this imbalance will only deepen in the coming years, as developing countries will struggle to compete with leaders in defense and security technologies. However, there are exceptions to this trend, particularly in asymmetric defense methods. For example, Ukraine’s experience demonstrates that effective use of AI for data analysis from drones, satellites, and electronic warfare systems can provide a strategic advantage even for countries with limited resources.
Scientific and technological stratification is another indicator of global imbalance. In 2024, the U.S. and China accounted for over 80% of global AI patents, while Africa and Latin America combined held less than 5%. High-performance computing infrastructure, essential for training advanced AI models, is also concentrated in a small number of countries: over 90% of all exascale-class data centers are located in G7 countries and China. This means that nations without access to such infrastructure must rely on external services and platforms, placing them at the mercy of international corporations. Projections for 2035 indicate that without targeted technology transfer programs, this situation will only worsen, as major breakthroughs in the field will remain confined to a few geopolitical blocs.
Educational stratification is one of the most critical factors determining the future dynamics of AI development in different parts of the world. In developed countries, over 60% of schools and universities have already integrated AI into their curricula, enabling students to work with cutting-edge technologies from an early age. In developing nations, this figure does not exceed 10%, and access to quality educational materials is severely limited. Approximately 80% of leading online AI courses are created in English, significantly complicating learning for non-English-speaking students. The U.S. and China produce more than 50% of global AI specialists annually, reinforcing their leadership. If current trends persist, by 2030, the digital divide in education will become even more pronounced, with long-term consequences for scientific and economic development.
Overall, the stratification of the world due to AI development is a complex process encompassing economic, scientific-technological, educational, and security aspects. Key indicators point to significant increases in inequality between countries that possess advanced technologies and make substantial investments in their development and those that are only beginning to adapt. However, there are some exceptions within this trend. Access to cloud computing and open AI models (such as LLaMA, BLOOM) allows developing countries to innovate without requiring extremely high investments in their computational infrastructure. For example, startups in Ukraine and some African nations are leveraging AI to optimize agribusiness, healthcare, and logistics, demonstrating the potential to overcome inequality.
The forecast for the next decade shows that without active international cooperation, this gap will only widen. Essential steps to address this issue include global initiatives in technology transfer, support for scientific research in developing countries, and the creation of educational programs accessible to all. Only through joint efforts can the negative consequences of technological stratification be minimized, and AI used as a tool for global development to overcome social and economic barriers in the future.
3. OPPORTUNITIES AND POTENTIAL FOR UKRAINE
Despite the ongoing war and economic instability, Ukraine retains the potential to integrate into the global artificial intelligence (AI) landscape. At the end of December 2024, Ukraine approved the Strategy for Digital Development of Innovative Activities until 2030, aimed at fostering technological progress, attracting investments, and creating a resilient innovation ecosystem.
![](https://svit.kpi.ua/wp-content/uploads/2025/01/photo_2025-01-31_20-24-01-2-1.jpg)
Despite the challenges, the country demonstrates a commitment to advancing AI, digital transformation, and security, which are crucial for its future competitiveness. The war has led to a brain drain and the destruction of infrastructure, yet Ukrainian professionals continue to play a vital role in the global tech industry. According to Interfax, over 65,000 IT specialists, representing more than 20% of Ukraine’s IT community, are currently working abroad. At the same time, domestic universities graduate 25,000–30,000 new specialists annually, maintaining the country’s talent pool.
Fundamental science remains a key driver of AI development. Research in mathematics, physics, and computational engineering provides the theoretical foundation for innovative solutions. Scientists at the V. M. Glushkov Institute of Cybernetics are conducting advanced studies on neural networks and big data processing. In 2024, over 50 research papers by Ukrainian scholars were published in leading international journals such as IEEE Transactions on Neural Networks and Nature Machine Intelligence.
AI plays a pivotal role in national security and defense. Ukraine actively utilizes AI for threat monitoring, combat analysis, logistics optimization, and intelligence automation. AI systems analyze satellite images to identify landmines and monitor occupied territories. In 2023, Ukrainian AI-powered drones received international recognition at NATO defense exhibitions. The country also has significant expertise in cybersecurity, positioning itself as a leader in protecting digital infrastructures.
The energy and environmental sectors represent another promising avenue. Ukrainian startups are developing AI-driven solutions for optimizing energy systems, integrating renewable energy sources, and conducting environmental monitoring. More than 10 Ukrainian companies are working on environmental monitoring technologies with the support of international donors.
Another important focus is natural language processing (NLP). Ukrainian companies like Grammarly have achieved global success in creating text analyzers, automated translation tools, and voice assistants. Equally significant is the field of computer vision, applied in areas such as autonomous vehicles, medical diagnostics, and defense technologies. Ukrainian IT companies have become trusted partners of leading global tech corporations.
Thus, Ukraine possesses all the prerequisites for integration into the global AI development ecosystem. Despite the challenges of war, the country has a robust educational and scientific foundation, highly qualified professionals, and a proven track record in critical areas. Investments in AI will not only strengthen national defense but also serve as a foundation for post-war recovery and economic growth.
4. ENERGY CHALLENGES ON THE PATH TO AI DEVELOPMENT
The development of artificial intelligence is accompanied by a rapid increase in energy consumption, driven by the growing computational intensity and scale of AI technologies. Training large models requires processing vast amounts of data, significantly increasing electricity costs. For instance, training the GPT-4 model is estimated to have required over 50 GWh of electricity, while a single ChatGPT query consumes approximately 2.9 Wh, compared to just 0.3 Wh for a standard Google search. At a volume of roughly 9 billion queries per day, comparable to global Google search traffic, AI-powered responses would amount to nearly 10 TWh of additional electricity per year, equivalent to the annual consumption of a city of approximately one million people.
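The figures above can be sanity-checked with simple arithmetic; the sketch below uses the per-query and query-volume estimates quoted in the text.

```python
# Back-of-the-envelope check of the energy figures quoted above.
WH_PER_AI_QUERY = 2.9   # Wh per ChatGPT-style query (estimate from the text)
WH_PER_SEARCH = 0.3     # Wh per standard Google search (estimate from the text)
DAILY_QUERIES = 9e9     # ~9 billion queries per day
DAYS = 365

# Total electricity if every such query used the AI path
gross_twh = DAILY_QUERIES * WH_PER_AI_QUERY * DAYS / 1e12   # Wh -> TWh

# Extra electricity relative to classic search handling the same volume
extra_twh = DAILY_QUERIES * (WH_PER_AI_QUERY - WH_PER_SEARCH) * DAYS / 1e12

print(f"gross: {gross_twh:.1f} TWh/yr, extra vs. search: {extra_twh:.1f} TWh/yr")
```

Both the gross figure (about 9.5 TWh) and the increment over classic search (about 8.5 TWh) land in the "nearly 10 TWh" range cited above.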
![](https://svit.kpi.ua/wp-content/uploads/2025/01/photo_2025-01-31_21-18-17-1-1024x576.jpg)
Modern data centers that support AI operations are key consumers of electricity. As of 2024, there are approximately 11,000 data centers worldwide, forming a critical component of the global digital infrastructure. These centers are predominantly located in the United States (over 5,400), the EU (over 3,000), China (over 400), the United Kingdom (over 400), Canada (over 250), Japan (over 200), India (over 250), and other economically developed countries with significant investments in digital technologies and robust energy systems. In 2024, the total electricity consumption of data centers globally exceeded 600 TWh, and by 2030, it is projected to grow to 1,065 TWh, accounting for about 4% of global electricity consumption. (This estimate is based on current AI development trends, which may change with the implementation of energy-efficient technologies such as neural network quantization, new processor architectures, and distributed computing.)
A significant portion of this energy is consumed not only for computations but also for cooling systems that ensure uninterrupted server operations. Estimates suggest that 30–40% of data center energy is spent on cooling, with outdated systems consuming as much as 60%.
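Cooling overhead is commonly expressed through Power Usage Effectiveness (PUE), the ratio of total facility energy to IT equipment energy. A minimal sketch, under the simplifying assumption that cooling is the only overhead, converts the percentages above into PUE values:

```python
def pue_from_cooling_fraction(cooling_fraction: float) -> float:
    """Power Usage Effectiveness if cooling were the only overhead.

    PUE = total facility energy / IT equipment energy.
    Simplification: everything that is not cooling is treated as IT load
    (lighting, power conversion, and other losses are ignored).
    """
    it_fraction = 1.0 - cooling_fraction
    return 1.0 / it_fraction

for frac in (0.30, 0.40, 0.60):
    print(f"cooling {frac:.0%} -> PUE ~ {pue_from_cooling_fraction(frac):.2f}")
# 30% -> ~1.43, 40% -> ~1.67, 60% (outdated systems) -> ~2.50
```

By this measure, a data center losing 60% of its energy to cooling needs two and a half units of electricity for every unit of useful computation.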
The primary driver of increasing energy consumption is the growing complexity of AI models, which require powerful computational resources. Models such as GPT-4, Claude 3, and Gemini 1.5, whose parameter counts are estimated at anywhere from tens of billions to roughly a trillion (developers do not publish exact figures), require tens of gigawatt-hours of electricity for training, and serving them at scale can push annual consumption into the terawatt-hour range, comparable to the usage of several million-person cities or even small countries. This trend underscores the need for energy-efficient solutions for AI development.
One of the primary areas of focus is improving processors. Graphics processing units (GPUs) offer high performance but consume significant energy (300–400 W per unit). Tensor processing units (TPUs), specialized for deep learning, are more energy-efficient (150–200 W). Neural processing units (NPUs) consume the least energy (5–50 W) and are used in embedded systems. Central processing units (CPUs) have moderate energy consumption (35–150 W) and, although less efficient for AI computations, remain a versatile choice for multitasking operations. Optical processors (Optical Computing Units, OCUs) are still in the research and development phase but hold the potential to revolutionize computing by offering significant advantages in speed and energy efficiency.
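To make the wattage ranges above concrete, the sketch below estimates annual electricity use for a hypothetical fleet of processors. The fleet size and utilization are illustrative round numbers, and the wattages are midpoints of the ranges quoted above, not measured figures:

```python
# Illustrative comparison of annual cluster energy by processor type.
HOURS_PER_YEAR = 24 * 365
UNITS = 10_000        # hypothetical processor count
UTILIZATION = 0.7     # hypothetical average load factor

# Midpoints of the per-unit power ranges quoted in the text (watts)
power_watts = {"GPU": 350, "TPU": 175, "NPU": 30, "CPU": 90}

for name, watts in power_watts.items():
    gwh = UNITS * watts * UTILIZATION * HOURS_PER_YEAR / 1e9  # Wh -> GWh
    print(f"{name}: ~{gwh:.1f} GWh/year for {UNITS:,} units")
```

Even with these rough numbers, swapping GPUs for NPUs in suitable workloads cuts the fleet's electricity demand by an order of magnitude, which is why hardware specialization figures so prominently in energy discussions.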
However, even the most efficient hardware solutions cannot fully address the problem without optimizing algorithms themselves. Quantization of neural networks and parameter reduction can lower computational costs by up to 40%. At the same time, exploring alternative energy sources for powering data centers is critically important. Integrating renewable energy sources such as solar and wind power can reduce CO₂ emissions by up to 30% with large-scale adoption.
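Quantization works by storing weights as low-precision integers instead of 32-bit floats. A minimal sketch of symmetric per-tensor int8 weight quantization (a generic illustration of the technique, not any specific framework's implementation):

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct an approximate float32 tensor from the int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)  # stand-in weight matrix
q, scale = quantize_int8(w)

print("memory ratio (float32 / int8):", w.nbytes / q.nbytes)
print("max reconstruction error:", float(np.abs(w - dequantize(q, scale)).max()))
```

The int8 copy occupies a quarter of the float32 memory, and the reconstruction error is bounded by half the quantization step, which is why inference quality typically degrades only slightly.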
Improving cooling systems in data centers is equally vital. Liquid cooling can reduce energy consumption by 30–50%, while natural cooling methods can achieve an additional 20% energy savings. Emerging technologies, such as optical processors, could potentially provide a tenfold reduction in energy consumption compared to traditional electronic chips.
Optimizing the distribution of computations across networks reduces the overall load on data centers. Transitioning to energy-efficient inference using compact models can decrease costs by 50–70% for specific tasks. Moreover, quantum computing has the potential to reduce energy consumption by a factor of 100 for certain optimization problems.
The continued development of AI is impossible without fundamentally new solutions in energy conservation. These solutions require a comprehensive approach that includes hardware improvements, algorithm optimization, the integration of clean energy, and the implementation of innovative computing methods. These measures will reduce the environmental impact of technology and ensure the sustainable development of digital infrastructure in the future.
5. A NEW PHASE IN AI DEVELOPMENT
Recent advancements in artificial intelligence at the beginning of 2025 mark a new phase in its evolution, characterized by large-scale initiatives and innovative approaches. Two key projects that have captured global attention—America’s Stargate and China’s startup DeepSeek—illustrate differing strategies for shaping the future of AI. Despite their differences, both aim to enhance AI efficiency and accessibility.
Stargate is a massive initiative by OpenAI, Oracle (USA), and SoftBank (Japan) with planned investments of $500 billion to create a new digital ecosystem. This project aims to bolster the United States’ leadership in AI, stimulate scientific research, and generate hundreds of thousands of jobs. However, its focus lies on expanding existing technologies rather than pursuing radical innovations.
In contrast, DeepSeek takes a fundamentally different approach. Founded in 2023 by Chinese entrepreneur Liang Wenfeng, the startup made a breakthrough in early 2025 with its AI assistant, which quickly overtook ChatGPT in App Store rankings. Its models, DeepSeek-V3 and DeepSeek-R1, are notable for their impressive resource efficiency: a Mixture-of-Experts (MoE) architecture activates only 37 billion of the model’s 671 billion parameters per query. This innovation reduces computational costs, cuts energy consumption by a reported 10–20 times compared to GPT-4, and lowers water usage for server cooling by 30–50%.
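The Mixture-of-Experts idea can be shown with a toy example: a router scores a pool of expert networks and activates only the top few per input, so most parameters sit idle on any given query. The sizes below are illustrative, not DeepSeek's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_EXPERTS, TOP_K = 64, 16, 2  # toy dimensions; production MoE models are far larger

# Each expert is a small linear layer; the router is a linear gate.
experts = [rng.normal(size=(D, D)) * 0.02 for _ in range(N_EXPERTS)]
router = rng.normal(size=(D, N_EXPERTS)) * 0.02

def moe_forward(x: np.ndarray) -> np.ndarray:
    logits = x @ router
    top = np.argsort(logits)[-TOP_K:]            # indices of the TOP_K best experts
    gates = np.exp(logits[top])
    gates = gates / gates.sum()                  # softmax over the selected experts
    # Only TOP_K of N_EXPERTS experts do any computation for this input.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

x = rng.normal(size=D)
y = moe_forward(x)

active_params = TOP_K * D * D
total_params = N_EXPERTS * D * D
print(f"active expert parameters per query: {active_params / total_params:.1%}")
```

Here only 12.5% of expert parameters participate in each forward pass; DeepSeek-V3's reported 37B-of-671B ratio (about 5.5%) follows the same principle at vastly larger scale.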
DeepSeek-V3 and DeepSeek-R1 owe their competitiveness with OpenAI’s well-known GPT models to engineering optimizations built around the MoE architecture. DeepSeek radically optimized the training process: training DeepSeek-V3 reportedly cost only $5.58 million, compared with an estimated $100 million for GPT-4. In addition, reinforcement learning with automated reward signals reduces reliance on costly human-labeled feedback, while knowledge distillation allows smaller models to learn from larger ones while maintaining high efficiency. Together, these optimizations significantly lower training expenses and energy consumption, making the models competitive even with limited resources.
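Knowledge distillation, mentioned above, trains a small "student" model to match the softened output distribution of a large "teacher". A minimal sketch of the classic distillation loss (the generic technique, not DeepSeek's exact recipe; the usual temperature-squared scaling is omitted for brevity):

```python
import numpy as np

def softmax(z: np.ndarray, T: float = 1.0) -> np.ndarray:
    """Numerically stable softmax with temperature T."""
    z = z / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T: float = 2.0) -> float:
    """KL divergence between softened teacher and student distributions.

    A higher temperature T exposes the teacher's relative confidence in
    wrong answers, which is the signal the student learns from.
    """
    p = softmax(np.asarray(teacher_logits), T)  # soft targets from the big model
    q = softmax(np.asarray(student_logits), T)  # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = np.array([4.0, 1.0, 0.2])
aligned = np.array([3.9, 1.1, 0.1])     # student close to the teacher
misaligned = np.array([0.2, 4.0, 1.0])  # student far from the teacher

print(distillation_loss(teacher, aligned) < distillation_loss(teacher, misaligned))
```

Minimizing this loss pulls the student's distribution toward the teacher's, letting a much smaller model inherit most of the larger model's behavior at a fraction of the compute.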
These two approaches to AI development reflect current trends: Stargate represents large-scale, extensive expansion of technological infrastructure, while DeepSeek offers more efficient algorithmic solutions. While the former focuses on massive investments in AI infrastructure and energy systems, the latter redefines the paradigm of AI creation. These strategic directions illustrate that the AI industry is evolving simultaneously along two dimensions—through capital-intensive growth and groundbreaking innovations that will shape its future.
Artificial intelligence is already transforming the world, and decisions made in the coming years will determine the future of global society. Among the critical challenges are energy efficiency in computation, reducing the technological gap between countries, regulating military use of AI, and adapting labor markets to automation. A balanced approach to these issues will be pivotal in forming competitive advantages at the global level. Countries that can integrate innovations into their economic and security strategies will have greater opportunities for successful development—a significant challenge but also a promising opportunity for Ukraine.
Mykhailo Zgurovsky, Academician of the National Academy of Sciences of Ukraine