Latest STAAR Results Raise Concerns Over Student Performance
(Texas Scorecard) – Newly released results of Texas high school students' End-of-Course assessments for 2025 show 'too many students are still not where they need to be academically,' according to the state agency that oversees public education.
The Texas Education Agency released Spring 2025 STAAR End-of-Course assessment results on Tuesday.
STAAR is short for the State of Texas Assessments of Academic Readiness, standardized tests 'designed to measure the extent to which a student has learned and is able to apply the defined knowledge and skills in the Texas Essential Knowledge and Skills at each tested grade, subject, and course.'
The STAAR EOC assessments measure whether high school students have mastered end-of-course knowledge and skills they need to progress to the next level and graduate ready for college, a career, or the military.
The results are 'a key measure of how Texas students are performing' in Algebra I, Biology, English I and II, and U.S. History, according to the TEA.
Compared to 2024 results, the percentages of students who 'meet' grade level in Algebra I and Biology increased slightly, while the percentages of students meeting grade level in English and History declined.
Overall performance levels remain poor. Subject mastery ranged from a high of 37 percent for U.S. History to a low of just 8 percent for English II.
Asian students continued to significantly outperform white, Hispanic, and African-American students in all subjects.
'Texas students and educators continue to work hard to demonstrate academic excellence,' said Texas Education Commissioner Mike Morath. 'At the same time, we also recognize that too many students are still not where they need to be academically.'
'Using a reliable system of assessments, we can continue making progress on the strategies that are most effective in improving student learning and long-term success,' he added.
Assessments from 2024 also showed declining scores.
The TEA's annual report for the 2023-24 school year showed reading and math scores for 3rd- and 8th-graders dropped 2-3 percentage points from the previous year, with less than half of 3rd-grade students reading at or above grade level—deficiencies that impact students' later school performance.
Scores from the 2024 National Assessment of Educational Progress (NAEP), known as 'The Nation's Report Card,' showed Texas 4th-grade students' reading scores had dropped two points from the previous tests in 2022 and were two points below the national average. Just 28 percent scored as 'proficient' or better in reading.
STAAR is unpopular with parents and teachers who say it puts too much pressure on students and forces educators to spend too much time 'teaching to the test.'
Proposed legislation to eliminate the high-stakes testing failed to pass during this year's recently concluded legislative session.
Parents can view their students' individual STAAR EOC results by visiting their school system's family portal or TexasAssessment.gov using the unique access code provided by their child's school.
Results for STAAR grades 3–8 assessments will be made publicly available on June 17.
Related Articles


CBS News (3 days ago)
STAAR test results are out. Here's how to look up your student's scores.
Results for the State of Texas Assessments of Academic Readiness exams were released this week, showing improvement in certain areas across the state. Results for the math, reading language arts, science and social studies STAAR exams taken this spring were released on June 17. Results for exams taken in June will be released on July 31, according to the Texas Education Agency.

How to look up your student's STAAR test results

Log on to TexasAssessment.gov and enter your student's unique six-character code, date of birth and legal first name. Your student's unique code should be the same every year. If you don't know your student's code, you can look it up under "Information and Support" by entering their first name and Social Security number.

2025 STAAR test results show slight improvement across Texas

Students across Texas continue to show growth in reading, but more than half remain below grade level in math, a concern for long-term academic success, test results show. Despite the positive momentum, some experts remain cautious. "It was really encouraging to see continued growth in reading," said Gabe Grantham, policy advisor for the nonprofit Texas 2036. "But more than half our students are below grade level in math, which is just not okay when we're thinking about how core those skills are for academic and post-academic success."


Time Magazine (4 days ago)
ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study
Does ChatGPT harm critical thinking abilities? A new study from researchers at MIT's Media Lab has returned some concerning results.

The study divided 54 subjects—18-to-39-year-olds from the Boston area—into three groups and asked them to write several SAT essays using OpenAI's ChatGPT, Google's search engine, and nothing at all, respectively. Researchers used an EEG to record the writers' brain activity across 32 regions and found that of the three groups, ChatGPT users had the lowest brain engagement and 'consistently underperformed at neural, linguistic, and behavioral levels.' Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study. The paper suggests that the use of LLMs could actually harm learning, especially for younger users.

The paper has not yet been peer reviewed, and its sample size is relatively small. But the paper's main author, Nataliya Kosmyna, felt it was important to release the findings to raise concerns that as society increasingly relies upon LLMs for immediate convenience, long-term brain development may be sacrificed in the process. 'What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, "let's do GPT kindergarten." I think that would be absolutely bad and detrimental,' she says. 'Developing brains are at the highest risk.'

Generating ideas

The MIT Media Lab has recently devoted significant resources to studying different impacts of generative AI tools. Studies from earlier this year, for example, found that generally, the more time users spend talking to ChatGPT, the lonelier they feel. Kosmyna, who has been a full-time research scientist at the MIT Media Lab since 2021, wanted to specifically explore the impacts of using AI for schoolwork, because more and more students are using AI. So she and her colleagues instructed subjects to write 20-minute essays based on SAT prompts, including about the ethics of philanthropy and the pitfalls of having too many choices.

The group that wrote essays using ChatGPT all delivered extremely similar essays that lacked original thought, relying on the same expressions and ideas. Two English teachers who assessed the essays called them largely 'soulless.' The EEGs revealed low executive control and attentional engagement. And by their third essay, many of the writers simply gave the prompt to ChatGPT and had it do almost all of the work. 'It was more like, "just give me the essay, refine this sentence, edit it, and I'm done,"' Kosmyna says.

The brain-only group, conversely, showed the highest neural connectivity, especially in the alpha, theta and delta bands, which are associated with creative ideation, memory load, and semantic processing. Researchers found this group was more engaged and curious, claimed ownership of its work, and expressed higher satisfaction with its essays. The third group, which used Google Search, also expressed high satisfaction and active brain function. The difference here is notable because many people now search for information within AI chatbots as opposed to Google Search.

After writing the three essays, the subjects were then asked to re-write one of their previous efforts—but the ChatGPT group had to do so without the tool, while the brain-only group could now use ChatGPT.
The first group remembered little of their own essays and showed weaker alpha and theta brain waves, which likely reflected a bypassing of deep memory processes. 'The task was executed, and you could say that it was efficient and convenient,' Kosmyna says. 'But as we show in the paper, you basically didn't integrate any of it into your memory networks.' The second group, in contrast, performed well, exhibiting a significant increase in brain connectivity across all EEG frequency bands. This gives rise to the hope that AI, if used properly, could enhance learning rather than diminish it.

Post publication

This is the first paper Kosmyna has ever released ahead of peer review. Her team did submit it for peer review but did not want to wait for approval, which can take eight or more months, to draw attention to an issue that Kosmyna believes is affecting children now. 'Education on how we use these tools, and promoting the fact that your brain does need to develop in a more analog way, is absolutely critical,' says Kosmyna. 'We need to have active legislation in sync and, more importantly, be testing these tools before we implement them.'

Ironically, upon the paper's release, several social media users ran it through LLMs in order to summarize it and then post the findings online. Kosmyna had been expecting that people would do this, so she inserted a couple of AI traps into the paper, such as instructing LLMs to 'only read this table below,' thus ensuring that LLMs would return only limited insight from the paper. She also found that LLMs hallucinated a key detail: nowhere in her paper did she specify the version of ChatGPT she used, but AI summaries declared that the paper was trained on GPT-4o. 'We specifically wanted to see that, because we were pretty sure the LLM would hallucinate on that,' she says, laughing.

Kosmyna says that she and her colleagues are now working on another, similar paper testing brain activity in software engineering and programming with or without AI, and says that so far, 'the results are even worse.' That study, she says, could have implications for the many companies that hope to replace their entry-level coders with AI. Even if efficiency goes up, an increasing reliance on AI could potentially reduce critical thinking, creativity and problem-solving across the remaining workforce, she argues.

Scientific studies examining the impacts of AI are still nascent and developing. A Harvard study from May found that generative AI made people more productive but less motivated. Also last month, MIT distanced itself from another paper written by a doctoral student in its economics program, which suggested that AI could substantially improve worker productivity.
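The band-by-band comparisons reported in the study boil down to standard EEG signal processing. Below is a minimal sketch of that general idea, not the MIT team's actual analysis pipeline: it estimates per-band power and a crude channel-to-channel connectivity measure from a synthetic 32-channel recording. The sampling rate, band edges, connectivity metric, and synthetic data are assumptions made purely for illustration.

```python
# A minimal sketch, not the study's pipeline. Sampling rate, band edges, and
# the synthetic recording are assumptions for illustration only.
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 256            # sampling rate in Hz (assumed)
N_CHANNELS = 32     # the study reports 32 recording sites
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13)}

rng = np.random.default_rng(0)
eeg = rng.standard_normal((N_CHANNELS, FS * 60))  # stand-in for one minute of EEG

def band_power(channel, fs, lo, hi):
    """Average spectral power of one channel within [lo, hi] Hz."""
    freqs, psd = welch(channel, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def band_connectivity(data, fs, lo, hi):
    """Mean absolute channel-to-channel correlation after band-pass filtering,
    a crude stand-in for the connectivity metrics such papers report."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, data, axis=1)
    corr = np.corrcoef(filtered)
    off_diag = corr[~np.eye(len(corr), dtype=bool)]
    return np.abs(off_diag).mean()

for name, (lo, hi) in BANDS.items():
    powers = [band_power(ch, FS, lo, hi) for ch in eeg]
    conn = band_connectivity(eeg, FS, lo, hi)
    print(f"{name}: mean power {np.mean(powers):.4f}, connectivity {conn:.3f}")
```

Real analyses of this kind rely on validated connectivity measures and artifact-cleaned recordings; the point here is only how 'alpha, theta and delta' activity becomes numbers on which groups can be compared.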


Scientific American (4 days ago)
What Is Your Cat Trying to Say? These AI Tools Aim to Decipher Meows
Meeaaaoow rises like a question mark before dawn. Anyone living with a cat knows their sounds: broken chirrups like greetings, low growls that warn, purrs stitched into sleepy conversation. Ethologists have organized feline sounds that share acoustic and contextual qualities into more than 20 groupings, including the meow, the hiss, the trill, the yowl and the chatter. Any individual meow belongs, academically speaking, to a broad 'meow' category, which itself contains many variations. The house cat's vocal repertoire is far greater than that of its largely silent wild cousins. Researchers have even begun to study whether cats can drift into regional dialects, the way human accents bend along the Hudson or the Thames. And just as humans gesticulate, shrug, frown and raise their eyebrows, cats' fur and whiskers write subtitles: a twitching tail declares excitement, flattened ears signal fear, and a slow blink promises peace. Felis catus is a chatty species that, over thousands of years of domestication, has pivoted its voice toward the peculiar primate that opens the fridge.

Now imagine pointing your phone at that predawn howl and reading: 'Refill bowl, please.' Last December Baidu—a Chinese multinational company that specializes in Internet services and artificial intelligence—filed a patent application for what it describes as a method for transforming animal vocalizations into human language. (A Baidu spokesperson told Reuters last month that the system is 'still in the research phase.') The proposed system would gather animal signals and process them: it would store kitten or puppy talk for 'I'm hungry' as code, then pair it not only with motion-sensing data such as tail swishes but also with vital signs such as heart rate and core temperature. All of these data would be whisked through an AI system and blended before emerging as plain-language phrases in English, Mandarin or any other tongue.

The dream of decoding cat speech is much older than deep learning. By the early 20th century meows had been recorded on wax cylinders, and in the 1970s John Bradshaw, a British anthrozoologist, began more than four decades of mapping how domestic cats tell us—and each other—what they mean. By the 1990s he and his then doctoral student Charlotte Cameron-Beaumont had established that the distinct domestic 'meow,' largely absent between adults in feral colonies, is a bespoke tool for managing humans. Even domestic cats rarely use it with each other, though kittens do with their mothers. Yet for all that anecdotal richness, the formal literature remained thin: there were hundreds of papers on birdsong and dozens on dolphin whistles but only a scattering on feline phonology until machine learning revived the field in the past decade.

One of the first hints that computers might crack the cat code came in 2018, when AI scientist Yagya Raj Pandeya and his colleagues released CatSound, a library of roughly 3,000 clips covering 10 types of cat calls labeled by the scientists—from hiss and growl to purr and mother call. Each clip went through software trained on musical recordings to describe a sound's 'shape'—how its pitch rose or fell and how long it lasted—and a second program cataloged them accordingly.
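To make that two-stage idea concrete (describe the sound's shape, then catalog it), here is a minimal sketch under stated assumptions; it is not the CatSound authors' code. It uses the open-source librosa library to pull a few shape descriptors from labeled clips (duration, pitch statistics, average spectral shape) and fits an off-the-shelf classifier. The file names, the pitch range, and the tiny clip list are placeholders; a real run would use the full labeled library and hold out clips for evaluation.

```python
# A minimal sketch, not the CatSound authors' pipeline. File names, the pitch
# range, and the tiny clip list below are illustrative assumptions.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def describe_clip(path):
    """Summarize one clip's 'shape': duration, pitch statistics, mean MFCCs."""
    audio, sr = librosa.load(path, sr=22050)
    f0 = librosa.yin(audio, fmin=80, fmax=1200, sr=sr)  # assumed feline pitch range
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    return np.hstack([
        len(audio) / sr,          # how long the call lasts, in seconds
        f0.mean(), f0.std(),      # where the pitch sits and how much it moves
        mfcc.mean(axis=1),        # average spectral shape
    ])

# Stand-in for the ~3,000 labeled clips in the real library (hypothetical files).
clips = [("hiss_01.wav", "hiss"), ("purr_01.wav", "purr"),
         ("meow_01.wav", "meow"), ("growl_01.wav", "growl")]

features = np.array([describe_clip(path) for path, _ in clips])
labels = [label for _, label in clips]

# Second stage: catalog the described sounds with a generic classifier.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(features, labels)
print(model.predict([describe_clip("mystery_call.wav")]))  # hypothetical new clip
```

In the published system the descriptors came from software trained on musical recordings rather than hand-picked statistics, but the division of labor is the same: features first, labels second.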
When the system was tested on clips it hadn't seen during training, it identified the right call type around 91 percent of the time. The study showed that the 10 vocal signals had acoustic fingerprints a machine can spot—giving researchers a proof of concept for automated cat-sound classification and eventual translation.

Momentum built quickly. In 2019 researchers at the University of Milan in Italy published a study focused on the one sound aimed squarely at Homo sapiens. The research sliced the meow into three situational flavors: 'waiting for food,' 'isolation in an unfamiliar environment' and 'brushing.' By turning each meow into a set of numbers, the researchers revealed that a 'feed me' meow had a noticeably different shape from a 'where are you?' meow or a 'brush me' meow. After they trained a computer program to spot those shapes, the researchers tested the system much as Pandeya and colleagues had tested theirs: it was presented with meows not seen during training—all hand labeled based on circumstances such as hunger or isolation. The system correctly identified the meows up to 96 percent of the time, and the research confirmed that cats really do tweak their meows to match what they're trying to tell us.

The research was then scaled to smartphones, turning kitchen-table curiosity into consumer AI. Developers at software engineering company Akvelon, including a former Alexa engineer, teamed up with one of the study's researchers to create the MeowTalk app, which they claim can translate meows in real time. MeowTalk has used machine learning to categorize thousands of user-submitted meows by common intent, such as 'I'm hungry,' 'I'm thirsty,' 'I'm in pain,' 'I'm happy' or 'I'm going to attack.' A 2021 validation study by MeowTalk team members claimed success rates near 90 percent. But the app also logs incorrect translation taps from skeptical owners, which serves as a reminder that the cat might be calling for something entirely different in reality. Probability scores can simply reflect pattern similarity—not necessarily the animal's exact intent.

Under the hood, these machine-learning systems treat cat audio tracks like photographs. A meow becomes a spectrogram: one axis represents time, the other indicates pitch, and colors or brightness show loudness. Just as AI systems can pick out a cat's whiskers in a photograph, they can classify sound images that subtly distinguish specific kinds of meows.

Last year researchers at Duzce University in Türkiye upgraded the camera: they fed spectrograms into a vision transformer, a model that chops them into tiles and assigns weights to each one to show which parts of the sound give the meow its meaning.

And in May 2025 entrepreneur Vlad Reznikov uploaded a preprint to the social network ResearchGate on what he calls Feline Glossary Classification 2.3, a system that explodes cat vocabulary categorizations to 40 distinct call types across five behavioral groups. He used one machine-learning system to find the shapes inside each sound and another to study how those shapes change over the length of a single vocalization. Howls stretch, purrs pulse and many other distinct vocalizations link together in varying ways. According to Reznikov's preprint, the model had a greater than 95 percent accuracy in real-time recognition of cat sounds. Peer reviewers have yet to sharpen their pencils, but if the system can reliably distinguish a bored yowl from a 'where's my salmon?' warble, it may, if nothing else, save a lot of carpets.
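As a rough illustration of the spectrogram-as-image approach described above, the sketch below converts a clip into a mel-spectrogram and passes it through a small convolutional network. It is a generic stand-in, not the Duzce vision transformer or MeowTalk's model; the class labels are borrowed from the Milan study's three contexts, and the file name, clip length, and network size are assumptions.

```python
# A generic stand-in, not the Duzce or MeowTalk models. Class labels follow the
# Milan study's three contexts; the file name and network size are assumptions.
import librosa
import numpy as np
import torch
import torch.nn as nn

CLASSES = ["waiting for food", "isolation", "brushing"]

def clip_to_spectrogram(path):
    """Turn a clip into a (1, n_mels, frames) 'image': time vs. pitch,
    with brightness encoding loudness in decibels."""
    audio, sr = librosa.load(path, sr=22050, duration=2.0)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=64)
    image = librosa.power_to_db(mel, ref=np.max)
    return torch.tensor(image, dtype=torch.float32).unsqueeze(0)

class MeowClassifier(nn.Module):
    """Tiny CNN: two convolution blocks, global pooling, one linear layer."""
    def __init__(self, n_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = MeowClassifier()                   # untrained: real use needs labeled data
spec = clip_to_spectrogram("meow_01.wav")  # hypothetical recording
logits = model(spec.unsqueeze(0))          # add a batch dimension
print(CLASSES[int(logits.argmax(dim=1))])
```

A vision transformer, as in the Duzce work, would replace the convolution blocks with attention over tiles of the same spectrogram, which is what lets the model weight the parts of the sound that carry the meaning.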
As for Baidu, the blueprint for its patent says its approach adds new kinds of information rather than deeper sound analysis. Imagine a cat with a fitness tracker and a baby monitor, as well as an AI assistant to explain what it all means. Whether combining these data will make the animal's message clearer or add confusion remains to be seen.

Machine learning is increasingly being used to understand other aspects of animal behavior as well. Brittany Florkiewicz, a comparative and evolutionary psychologist, uses it to identify how cats mimic one another's facial expressions and to track the physical distance between them to infer relationships. 'Generally speaking, machine learning helps expedite the research process, making it very efficient and accurate, provided the models are properly guided,' she says. She believes the emergence of apps for pet owners shows how much people are thinking about innovative ways to better care for their pets. 'It's positive to see both the research community and everyday pet owners embracing this technology,' she says.

Interest in animal vocalization extends not just to cats but to one of their favorite menu items: mice. DeepSqueak, a machine-learning system devised by psychologist Kevin Coffey and his team, does for rodents what the other systems do for cats. 'Mice courtship is really interesting,' Coffey says—particularly 'the full songs that they sing that humans can't hear but that are really complex songs.' Mice and rats normally communicate in an ultrasonic range, and machine learning decodes these inaudible chirps and whistles and links them to circumstances in which they occur in the lab.

Coffey points out, however, that 'the animal communication space is defined by the concepts that are important to [the animals]—the things that matter in their lives.... A rat or a mouse or cat is mostly interested in communicating that they want social interaction or play or food or sex, that they're scared or hurt.' For this reason, he's skeptical of grandiose claims made by AI companies 'that we can overlap the conceptual semantic space of the animal languages and then directly translate—which is, I think, kind of total nonsense. But the idea that you can record and categorize animal vocalizations, relate them to behavior, and learn more about their lives and how complex they are—that's absolutely happening.' And though he thinks an app could realistically help people recognize when their cat is hungry or wants to be petted, he doubts it's necessary. 'We're already pretty good at that. Pet owners already communicate with their animal at that level.'

Domesticated animals also communicate across species. A 2020 study found that dogs and horses playing together rapidly mimicked each other's relaxed open-mouth facial expressions and self-handicapped, putting themselves into disadvantageous or vulnerable situations to maintain well-balanced play. Florkiewicz believes this might be partly a result of domestication: humans selected which animals to raise based on communicative characteristics that facilitated shared lives. The mutual story of humans and cats is thought to have begun 12,000 years ago—when wildcats hunted rodents in the first grain stores of Neolithic farming villages in the Fertile Crescent—so there has been time for us to adapt to each other. By at least 7500 B.C.E., in Cyprus (an island with no native felines), a human had been interred with a cat.
Later the Egyptians revered them; traders, sailors and eventually Vikings carried them around the world on ships; and now scientists have adapted humans' most sophisticated technology to try to comprehend their inner lives. But perhaps cats have been coaching us all along, and maybe they'll judge our software with the same cool indifference they reserve for new toys. Speech, after all, isn't merely a label but a negotiated meaning—and cats, as masters of ambiguity, may prefer a little mystery.