CodeSignal Report Ranks Universities by Measurable Technical Skills, Highlighting Top Engineering Talent Nationwide

Yahoo · 14-05-2025

Nearly 1 in 3 top-performing students come from universities overlooked by traditional rankings
SAN FRANCISCO, May 14, 2025 /PRNewswire/ -- CodeSignal, a leading skills assessment and experiential learning platform, today unveils its fourth annual University Ranking Report, a university ranking methodology based purely on students' verified coding skills.
Unlike traditional rankings that rely on legacy signals, CodeSignal's report offers an objective, data-driven alternative: one that evaluates universities based on how well their students perform on an assessment of real-world coding skills. In an AI-transformed workforce, the ability to think computationally, solve problems, and write strong foundational code remains critical, regardless of where a student went to school.
By analyzing thousands of General Coding Assessments (GCA) completed by students worldwide, CodeSignal's Talent Science Team reveals a powerful conclusion: top engineering talent is everywhere.
Here are the top 15 universities for 2025:
1. Carnegie Mellon University
2. Massachusetts Institute of Technology
3. Stony Brook University
4. University of California, Los Angeles
5. University of Pennsylvania
6. California Institute of Technology
7. University of California, San Diego
8. Duke University
9. San José State University
10. University of Southern California
11. Rice University
12. Yale University
13. Georgia Institute of Technology
14. Johns Hopkins University
15. Indiana University
High-level results:
28.4% of high-scorers come from schools not included in the US News & World Report's top 50 undergraduate engineering programs.
12 of the top 50 schools in our skill-based ranking did not make the US News & World Report top 50.
Two of the top 10 US schools in our rankings, Stony Brook University (#3) and San José State University (#9), didn't make the US News & World Report top 50.
Korea Advanced Institute of Science & Technology is the top non-US school for software engineering talent this year, ranking just below Rice University (#12 on the US list).
"This report is a celebration of the universities equipping students with the skills that matter most," said Tigran Sloyan, CEO and Co-Founder of CodeSignal. "When we focus on what students can actually do, not just where they studied, we uncover incredible talent from institutions of all types. It's a reminder that great engineers are everywhere, and we need to broaden how we recognize and recruit them."
While traditional rankings reward legacy signals, CodeSignal's 2025 University Ranking Report focuses on outcomes – what students can actually do when faced with real-world engineering challenges. CodeSignal's data makes the case that technical talent isn't confined to a short list of name-brand schools. It's everywhere. For employers competing in an AI-driven economy, this report is a call to rethink where, and how, they discover their next generation of engineers.
To view the full report, please visit: https://codesignal.com/university-ranking-report-2025
About CodeSignal
CodeSignal is how the world discovers and develops the skills that will shape the future. Our AI-native skills assessment and experiential learning platform helps organizations hire, train, and grow talent at scale while empowering individuals to advance their careers.
Whether you're growing your team's potential or unlocking your own, CodeSignal meets you where you are and gets you where you need to go. With millions of skills assessments completed, CodeSignal is trusted by companies like Netflix, Capital One, Meta, and Dropbox and used by learners worldwide.
For more information, visit www.codesignal.com or connect with CodeSignal on LinkedIn.
View original content to download multimedia: https://www.prnewswire.com/news-releases/codesignal-report-ranks-universities-by-measurable-technical-skills-highlighting-top-engineering-talent-nationwide-302454999.html
SOURCE CodeSignal



Related Articles

I use these 3 ChatGPT prompts to work smarter and stay competitive — here's how
Yahoo · 21 minutes ago

When you buy through links on our articles, Future and its syndication partners may earn a commission.

If you've been following the news, you've probably seen it: AI-driven layoffs are on the rise. From newsroom cuts to tech giants automating tasks once handled by entire teams, AI is getting smarter and changing the job market faster than anyone expected. Whether you're trying to protect your current job or looking for your next role, the uncertainty is real. Even though I test AI tools for a living, I found myself asking: could AI replace me, too? That's when I tried a simple exercise with ChatGPT — using just a few prompts to assess my career risk and figure out how to stay ahead of AI. Here's exactly how you can do the same.

Start by copying and pasting your current resume into ChatGPT (or your preferred chatbot). You can also upload it directly; just be sure you've removed all personal, confidential, or sensitive information first. If you don't have a formal resume handy, you could use ChatGPT to write one, or you can provide a summary of your current role, responsibilities, and major skills.

Once you've shared your background, type this prompt: "Based on my resume and skills, how soon will AI take my job?" You might be surprised by the response. AI can provide a candid, and often eye-opening, assessment of how vulnerable your role is to automation — and which aspects of your job are still uniquely human. It may flag parts of your skill set that are becoming less valuable in the current market. But it may also give you reassurance based on your skills and ability to adapt. This is also a good time to enter the description of a job you're hoping to land in the next few years. Will it even exist?

Next, follow up with this prompt: "What skills do I need to learn to pivot and future-proof my career?"
The chatbot will typically generate a list of in-demand skills that can help you adapt, pivot to more secure roles, or even transition into entirely new career paths. These often include areas where human expertise still has an edge — think creativity, emotional intelligence, leadership, strategy, problem-solving, and relationship-building.

Based on what the chatbot told you, take your prompting a step further by asking ChatGPT: "What's the best way for me to start learning these skills?" In seconds, you'll get suggestions for online courses, certifications, books, podcasts, and communities that can help you upskill — often tailored to your current industry or experience level.

This quick exercise won't eliminate the risks of an AI-driven job market, but it will give you clarity and maybe even peace of mind as you discover new ways to use your skills. These prompts turn an overwhelming question (will AI take my job?) into an actionable plan. More importantly, the exercise serves as a wake-up call: never stop learning. There are numerous ways to elevate your human skill set and even develop skills to use AI to do your job better. The best way to stay relevant is to continuously evolve your skills and, where possible, double down on the human qualities AI can't easily replicate. That's your edge in an AI-powered world. Asking ChatGPT the tough questions is a habit I now recommend to anyone, in any industry.

OpenAI scrubs news of Jony Ive deal amid trademark dispute
Yahoo · 34 minutes ago

OpenAI has removed news of its deal with Jony Ive's io from its website. The takedown comes amid a trademark dispute filed by iyO, an AI hardware startup. OpenAI said it doesn't agree with the complaint and is "reviewing our options."

Turns out "i" and "o" make for a popular combination of vowels in the tech industry. Sam Altman's OpenAI launched a very public partnership with io, the company owned by famed Apple designer Jony Ive, in May. The announcement included a splashy video and photos of the two of them looking like old friends.

On Sunday, however, OpenAI scrubbed any mention of that partnership from its website and social media. That's because iyO, a startup spun out of Google's moonshot factory, X, and which sounds like io, is suing OpenAI, io, Altman, and Ive for trademark infringement.

iyO's latest product, iyO ONE, is an "ear-worn device that uses specialized microphones and bone-conducted sound to control audio-based applications with nothing more than the user's voice," according to the suit iyO filed on June 9. The partnership between OpenAI and io, meanwhile, is rumored to be working on a similarly screen-less, voice-activated AI device.

According to its deal with OpenAI, Ive's firm will lead creative direction and design at OpenAI, focusing on developing a new slate of consumer devices. When the deal was announced, neither party shared specific details about future products. However, Altman said the partnership would shape the "future of AI."

iyO approached OpenAI earlier this year about a potential collaboration and funding. OpenAI declined that offer, however, and says it is now fighting the trademark lawsuit. "We don't agree with the complaint and are reviewing our options," OpenAI told Business Insider.

Read the original article on Business Insider

ChatGPT Has Already Polluted the Internet So Badly That It's Hobbling Future AI Development
Yahoo · an hour ago

The rapid rise of ChatGPT — and the cavalcade of competitors' generative models that followed suit — has polluted the internet with so much useless slop that it's already kneecapping the development of future AI models. As the AI-generated data clouds the human creations that these models are so heavily dependent on amalgamating, it becomes inevitable that a greater share of what these so-called intelligences learn from and imitate is itself an ersatz AI creation. Repeat this process enough, and AI development begins to resemble a maximalist game of telephone in which not only is the quality of the content being produced diminished, resembling less and less what it's originally supposed to be replacing, but in which the participants actively become stupider. The industry likes to describe this scenario as AI "model collapse."

As a consequence, the finite amount of data predating ChatGPT's rise becomes extremely valuable. In a new feature, The Register likens this to the demand for "low-background steel," or steel that was produced before the detonation of the first nuclear bombs, starting in July 1945 with the US's Trinity test.

Just as the explosion of AI chatbots has irreversibly polluted the internet, so did the detonation of the atom bomb release radionuclides and other particulates that have seeped into virtually all steel produced thereafter. That makes modern metals unsuitable for use in some highly sensitive scientific and medical equipment. And so, what's old is new: a major source of low-background steel, even today, is WWI- and WWII-era battleships, including a huge naval fleet that was scuttled by German Admiral Ludwig von Reuter in 1919.

Maurice Chiodo, a research associate at the Centre for the Study of Existential Risk at the University of Cambridge, called the admiral's actions the "greatest contribution to nuclear medicine in the world." "That enabled us to have this almost infinite supply of low-background steel. If it weren't for that, we'd be kind of stuck," he told The Register. "So the analogy works here because you need something that happened before a certain date." "But if you're collecting data before 2022, you're fairly confident that it has minimal, if any, contamination from generative AI," he added. "Everything before the date is 'safe, fine, clean,' everything after that is 'dirty.'"

In 2024, Chiodo co-authored a paper arguing that there needs to be a source of "clean" data not only to stave off model collapse, but to ensure fair competition between AI developers. Otherwise, the early pioneers of the tech, after ruining the internet for everyone else with their AI's refuse, would boast a massive advantage by being the only ones that benefited from a purer source of training data.

Whether model collapse, particularly as a result of contaminated data, is an imminent threat is a matter of some debate. But many researchers have been sounding the alarm for years now, including Chiodo. "Now, it's not clear to what extent model collapse will be a problem, but if it is a problem, and we've contaminated this data environment, cleaning is going to be prohibitively expensive, probably impossible," he told The Register.

One area where the issue has already reared its head is with the technique called retrieval-augmented generation (RAG), which AI models use to supplement their dated training data with information pulled from the internet in real time. But this new data isn't guaranteed to be free of AI tampering, and some research has shown that this results in the chatbots producing far more "unsafe" responses.

The dilemma is also reflective of the broader debate around scaling, or improving AI models by adding more data and processing power. After OpenAI and other developers reported diminishing returns with their newest models in late 2024, some experts proclaimed that scaling had hit a "wall." And if that data is increasingly slop-laden, the wall would become that much more impassable.

Chiodo speculates that stronger regulations like labeling AI content could help "clean up" some of this pollution, but this would be difficult to enforce. In this regard, the AI industry, which has cried foul at any government interference, may be its own worst enemy.

"Currently we are in a first phase of regulation where we are shying away a bit from regulation because we think we have to be innovative," Rupprecht Podszun, professor of civil and competition law at Heinrich Heine University Düsseldorf, who co-authored the 2024 paper with Chiodo, told The Register. "And this is very typical for whatever innovation we come up with. So AI is the big thing, let it go and fine."
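The "everything before the date is clean, everything after is dirty" idea above amounts to a cutoff filter over a document corpus. Here is a minimal sketch of that filter; the corpus records, field layout, and choice of ChatGPT's public launch as the cutoff are illustrative assumptions, not details from the article:

```python
from datetime import date

# Hypothetical corpus records: (text, publication_date).
corpus = [
    ("Article written by a human in 2019", date(2019, 6, 1)),
    ("Blog post from early 2021", date(2021, 3, 15)),
    ("Possibly AI-generated page from 2023", date(2023, 8, 9)),
]

# ChatGPT's public launch (November 30, 2022) is a natural cutoff:
# documents published before it are unlikely to contain generative-AI
# output, mirroring pre-1945 low-background steel.
CUTOFF = date(2022, 11, 30)

# Keep only "low-background" documents from before the cutoff.
clean = [text for text, published in corpus if published < CUTOFF]
print(clean)
```

In practice the hard part is not the filter but the metadata: a publication timestamp alone cannot prove a document is human-written, which is why Chiodo argues that cleaning a contaminated data environment after the fact may be prohibitively expensive.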
