
China's Humanoid Robots Poised to Revolutionize Manufacturing with AI Power
In a Shanghai warehouse, humanoid robots work relentlessly, folding clothes, making sandwiches, and completing other tasks for up to 17 hours a day. The aim is to gather data that will refine these robots, creating machines that are set to transform the workforce and manufacturing processes globally.
Chinese humanoid robot startup AgiBot, which operates this facility, envisions a future where robots could not only assemble products but also assemble themselves. The technology promises to change the way humans work, live, and interact with machines.
Chinese President Xi Jinping, during his recent visit to AgiBot's Shanghai facility, emphasized the significance of robots for the country's future, even joking that these machines might one day form a football team. The push for innovation in this sector coincides with Xi's broader call for private firms to help propel China's economy amidst global challenges like trade tensions and population decline.
China is striving to spark an industrial revolution driven by AI-powered humanoid robots, aiming to preserve its manufacturing edge and overcome economic hurdles. Its humanoid robots are no longer just feats of engineering; they are being equipped with AI to make them commercially viable in real-world applications.
AI Advancements and Government Support Fuel Growth
China's rapid strides in humanoid robotics owe much to significant advances in AI from homegrown firms such as DeepSeek, combined with strong government subsidies. Over the past year alone, the government has allocated more than $20 billion to the sector, and an additional $137 billion fund has been earmarked to support AI and robotics startups. In 2024, state procurement of humanoid robots soared to 214 million yuan from just 4.7 million yuan the previous year.
China's drive to build a competitive edge in humanoid robotics is also supported by its well-established supply chain, which allows robot components to be produced efficiently and at low cost. Experts predict that by 2030, humanoid robot production costs could be halved, making the machines viable for large-scale deployment across factories and other industries.
Impact on Manufacturing and Labor Force
China's push to adopt humanoid robots is poised to reshape manufacturing. With major strides in robot agility, including feats like running marathons and performing acrobatics, humanoids are moving beyond the realm of novelty. The Chinese government sees these robots as a potential solution to labor shortages, particularly in industries like elderly care, where the aging population is placing growing demands on services.
However, the rise of humanoid robots has sparked discussions about the future of employment. With over 123 million workers in China's manufacturing sector, experts predict that intelligent robots could replace a significant portion of this workforce. Some lawmakers have even proposed an AI unemployment insurance program to support displaced workers.
While there are concerns about job losses, the Chinese government suggests that the long-term benefits of automation will outweigh the short-term disruptions, with a particular focus on relieving humans from dangerous or repetitive jobs.
The Road Ahead
As humanoid robots become more advanced and capable of performing complex tasks, the global manufacturing landscape will change. With China at the forefront of this technological shift, the potential for economic growth, productivity improvements, and new job creation in emerging sectors looks promising. However, how China manages the social and economic challenges posed by widespread automation will be key to shaping its future role in the global economy.

Related Articles

Business Standard
2 hours ago
Iran MPs approve Hormuz closure, US warns of oil, trade disruption
Iran's Parliament, known as the Majlis, has approved the closure of the Strait of Hormuz following US strikes on Iranian nuclear sites, according to a report by state media outlet PressTV on Sunday, 22 June 2025. The report cites senior lawmaker Esmaeil Kowsari.

Kowsari, a member of the Majlis Committee on National Security and Foreign Policy, said lawmakers had reached an agreement to close the strait in response to US actions and the lack of an international response. Kowsari said, 'The Parliament has come to the conclusion that it should close the Hormuz Strait, but the final decision lies with the Supreme National Security Council.'

The Strait of Hormuz, situated at the entrance to the Persian Gulf, is one of the world's most important trade routes, especially for oil imports. It is estimated that around 20 per cent of the global oil supply — about 17 to 18 million barrels per day — passes through the strait. It is also a major route for the export of liquefied natural gas (LNG), particularly from Qatar.

Reacting to Iran's decision, US Secretary of State Marco Rubio called the move 'economic suicide'. Speaking to Fox News, Rubio said, 'I encourage the Chinese government in Beijing to call them about that, because they heavily depend on the Straits of Hormuz for their oil.' Rubio further added, 'If they do that, it will be another terrible mistake. It's economic suicide for them if they do it. And we retain options to deal with that, but other countries should be looking at that as well. It would hurt other countries' economies a lot worse than ours.'

Importantly, the strait is the only maritime route connecting the Persian Gulf to open seas. It is used by major oil-producing countries, including Iran, Saudi Arabia, Iraq, Kuwait and the UAE.

The media report from PressTV also warns that any interruption in the Strait of Hormuz could cause global oil prices to surge and undermine international energy security. Citing experts, the report warned that a major disruption in the strait could halt operations at multinational companies within days due to fuel shortages.

India, which imports about 80 per cent of its oil, is likely to feel the impact of any disruption in the strait. The route is also critical for ships travelling to and from Indian ports. According to a report by The Hindu on 13 June, any disruption in the Strait of Hormuz could lead to a 40–50 per cent rise in shipping costs and delays of up to 15–20 days.


Economic Times
2 hours ago
Will India's AI Action Summit redefine global AI governance?
After Britain, South Korea and France, it's India's turn to host the next AI Action Summit. GoI has invited public comments until June 30 to shape the summit, which sets the tone for AI governance. India is expected to bring global majority perspectives from the margins to the mainstream and exhibit a unique approach to the delicate balancing acts involved in AI. There is the question of whether to regulate, and if so, how. The recent US proposal to ban state AI laws for 10 years is seen by many as pro-innovation. By contrast, the EU's AI Act takes a more precautionary, product-safety approach. China's approach tends to tailor regulation to authoritarian state control. Beyond this dichotomy, India is often seen as capable of offering a third way. The summit presents an opportunity for India to showcase elements of this approach and take on the equally thorny question of how open or closed AI development should be.

On openness, India can push beyond the binary of 'open or closed' approaches to releasing AI base models. Some argue that AI models must be kept under the control of a small number of people. Others argue that base models should be released with no restrictions. India has no interest in a future where a handful of US and Chinese companies hold the key to advanced AI models and can arbitrarily restrict their use. At the same time, however, openness should not be understood in a purely libertarian way where people can do whatever they want with these models. What we need is a truly open approach that enables independent evaluation of how the foundation models work so that they can be used to innovate without inadvertently importing the latest US political, one-upmanship-driven ideas or Chinese state censorship. Demanding this openness and transparency, followed by independent testing and evaluation, should be a key goal for India with its new AI Safety Institute (ASI).

Additionally, ASI must take the lead in ensuring that systems, particularly in high-impact domains such as public services, are secure and reliable. With its 'Safe and Trusted AI' pillar, the IndiaAI mission is encouraging projects on bias mitigation, privacy enhancement and governance testing, themes that should reflect in the summit's agenda, and affirming the stance taken by the EU to push for 'Trustworthy AI'. It is key here, however, that trustworthiness, privacy and safety are not merely demanded of AI systems but rather achieved through effective governance frameworks. Many of the purported benefits of AI are undermined if the data is not secure, if the system responses are unreliable or biased, and if the public turns against the technology due to high-profile scandals. A telling case is the 'Child Benefits Scandal' in the Netherlands, where an opaque and discriminatory system mistakenly flagged thousands of families for benefits-related fraud. In response, the Netherlands is working towards improving AI accountability through human rights impact assessments and public databases of government AI systems. Public trust in AI systems can only be achieved through robust transparency and accountability practices.
Centring global conversations and policy imperatives on open, transparent and rights-protecting AI development reduces uncertainty and offers a level playing field for smaller players, even if it is not enshrined in dedicated legislation but achieved through an ecosystem of institutional oversight via ASI and adaptation of existing laws, as is the preference of the IndiaAI mission. The logic is straightforward: when a tech is built to be rights-respecting and safe, more people trust it, and therefore more people use it, particularly when it can be independently verified. It's a win-win for commerce, rights and the global majority.

Such frameworks are necessary because, without close attention to the impact of AI models, the region risks becoming a testing ground for nascent and half-baked technology that is developed elsewhere. Their absence could result in 'innovation arbitrage', a term used to refer to the exploitation of regulatory gaps to deploy questionable technology. The harms of AI-driven systems without oversight are well documented, be it opaque, unaccountable data collection practices that give people no real choice, or flawed algorithmic decisions that impact people's education, employment and healthcare.

In demanding openness, transparency and security, India has an opportunity to work with global majority countries to develop shared approaches and demands. Demanding such inclusion and space for leadership would allow us to leverage our collective expertise to ensure 'access for all', a key goal of GoI. The AI Impact Summit is the moment to bring like-minded countries together and lay out a roadmap for how AI development can be driven in a way that benefits the global majority and allows for individual and regional autonomy, instead of cementing hegemony.

(Disclaimer: The opinions expressed in this column are those of the writer.)

