Microsoft Edge 135 Delivers Productivity and Security Updates
Microsoft has released Edge 135, which brings big changes to the company's venerable web browser.
First, it reduces security risk by updating the browser's root certificate store, which is part of the Microsoft Trusted Root Program. Starting April 16, 2025, certificates that chain to certain Entrust root certificates will not be trusted by default if their earliest Signed Certificate Timestamp (SCT) falls after that date. Enterprises should replace any affected certificates, or decide to trust the root certificate locally after evaluating the risks, as reported by Windows Report.
Edge for Business is also adding work-related searches in the address bar that direct users to M365.cloud.microsoft instead of Bing.com, giving users easier access to work-related content such as documents and bookmarks and making it easier to contact colleagues.
The Work Feed on the enterprise New Tab Page is also getting an overhaul starting in mid-April. Microsoft 365 users eligible for the Work Feed will see an updated My Feed focused on productivity: recent documents, SharePoint sites, Outlook events, and To Do tasks. Content about network activity will no longer appear.
Edge 135 also brings smaller improvements to the user experience. Edge Sync keeps your data updated across devices, and Bing's trending suggestions now appear in the address bar dropdown on the New Tab Page; administrators can manage this feature using the AddressBarTrendingSuggestEnabled policy.
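For administrators who manage Edge through Windows policy, a minimal sketch of disabling the trending-suggestions feature might look like the following .reg fragment. This assumes the standard Edge policy location under HKEY_LOCAL_MACHINE; verify the path and value against Microsoft's Edge policy documentation before deploying (set the value to 1 to re-enable):

```
Windows Registry Editor Version 5.00

; Disable Bing trending suggestions in the Edge address bar (0 = disabled)
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge]
"AddressBarTrendingSuggestEnabled"=dword:00000000
```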
Related Articles


Forbes
30 minutes ago
Six Ways To Advance Modern Architecture For AI Systems
These days, many engineering teams are coming up against a common problem: the models are too big. The problem comes in various forms, but there's often a connecting thread to the challenges. Projects are running up against memory constraints. As parameter counts range into the billions and trillions, data centers have to keep up. Stakeholders have to watch for thresholds in vendor services, and cost is generally an issue. However, there are new technologies on the horizon that can take that memory footprint and compute burden and reduce them to something more manageable. How are today's innovators doing this? Let's take a look.

Input and Data Compression

First of all, there is the compression of inputs. You can design a lossy algorithm to compress the model, and even run the compressed model in place of the full one; compression methodologies save space when it comes to specialized neural network functions. Here's a snippet from a paper posted at Apple's Machine Learning Research resource: 'Recently, several works have shown significant success in training-free and data-free compression (pruning and quantization) of LLMs achieving 50-60% sparsity and reducing the bit-width down to 3 or 4 bits per weight, with negligible perplexity degradation over the uncompressed baseline.' That's one example of how this can work. A Microsoft document looks at prompt compression, another way to shrink or reduce the data moving through these systems.

The Sparsity Approach: Focus and Variation

Sometimes you can carve away part of the system design in order to save resources. Think about a model where all of the attention areas work the same way. Maybe some of the input area is basically white space, while the rest of it is complex and relevant. Should the model's coverage be homogeneous and one-size-fits-all?
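The pruning-and-quantization result quoted above from Apple's research can be made concrete with a toy sketch of 4-bit weight quantization. This is a minimal illustration, not the paper's method; the symmetric per-tensor scheme and the function names are illustrative assumptions:

```python
import numpy as np

def quantize_4bit(w):
    """Symmetric per-tensor quantization of weights to 4-bit integers."""
    scale = np.abs(w).max() / 7.0          # map [-max, max] onto [-7, 7]
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the 4-bit codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(16, 16)).astype(np.float32)   # toy weight matrix
q, scale = quantize_4bit(w)
w_hat = dequantize(q, scale)
max_err = float(np.abs(w - w_hat).max())           # bounded by scale / 2
```

Each weight now takes 4 bits instead of 32, an 8x reduction in memory, at the cost of a reconstruction error of at most half a quantization step. Real LLM quantizers refine this with per-channel scales and calibration data, which is how they keep perplexity degradation negligible.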
You're spending the same amount of compute on high- and low-attention areas. Alternatively, the people engineering these systems can remove the tokens that don't get a lot of attention, based on what's important and what's not. In this part of the effort, you're seeing hardware advances as well. More specialized GPUs and multicore processors can have an advantage when it comes to this kind of differentiation, so take a look at everything that makers are doing to usher in a whole new class of GPU gear.

Changing Context Strings

Another major problem with network size is related to the context windows that systems use. For a typical large language system operating on a sequence, the length of that sequence is important. More context means more of certain kinds of functionality, but it also requires more resources. By changing the context, you change the 'appetite' of the system. Here's a bit from the above resource on prompt compression: 'While longer prompts hold considerable potential, they also introduce a host of issues, such as the need to exceed the chat window's maximum limit, a reduced capacity for retaining contextual information, and an increase in API costs, both in monetary terms and computational resources.' Directly after that, the authors go into solutions that might, in theory, have broad application to different kinds of fixes.

Dynamic Models and Strong Inference

Here are two more big trends right now: one is the emergence of strong inference systems, where the machine teaches itself what to do over time based on its past experience. Another is dynamic systems, where the input weights and other parameters change over time rather than remaining the same. Both of these hold promise for matching the design and engineering needs people have when they're building these systems. There's also the diffusion model, where you add noise, analyze it, and remove that noise to come up with a new generative result.
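The sparsity idea of dropping low-attention tokens can be sketched in a few lines. Assume we score each token by the average attention it receives and keep only the top half; the scoring rule, the names, and the toy data here are illustrative assumptions, not a production pruning algorithm:

```python
import numpy as np

def prune_tokens(x, attn, keep_ratio=0.5):
    """Drop low-attention tokens: keep the keep_ratio fraction of tokens
    that receive the most attention, preserving their original order."""
    importance = attn.mean(axis=0)               # mean attention received per token
    k = max(1, int(len(importance) * keep_ratio))
    keep = np.sort(np.argsort(importance)[-k:])  # top-k indices, original order
    return x[keep], keep

rng = np.random.default_rng(1)
seq_len, dim = 8, 4
x = rng.normal(size=(seq_len, dim))              # toy token embeddings
logits = rng.normal(size=(seq_len, seq_len))
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)  # rows sum to 1
x_pruned, kept = prune_tokens(x, attn, keep_ratio=0.5)
```

Because attention cost grows quadratically with sequence length, halving the sequence this way roughly quarters the compute of the next attention layer, which is the resource saving the sparsity approach is after.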
We talked about this last week in a post about the best ways to pursue AI. Last, but not least, we can evaluate traditional approaches such as digital twinning. Twinning is great for precise simulations, but it takes a lot of resources; if there's a better way to do something, you might be able to save a lot of compute. These are just some of the solutions we've been hearing about, and they dovetail with the idea of edge computing, where you're doing more on an endpoint device at the edge of a network. Microcontrollers and small components can be a new way to crunch data without sending it through the cloud to some centralized location. Keep all of these advances in mind as we watch what people are doing these days with AI.


Forbes
an hour ago
GenAI Won't Replace Doctors, But It Could Make Them Miserable
No matter how powerful generative AI becomes, physicians will still have jobs. But will those jobs be fulfilling or soul-crushing?

Will AI replace doctors? A year ago, most physicians would've confidently answered 'no.' Medicine, they'd argue, is too complex, too personal, too human to be handled by machines, no matter how advanced. Now, that confidence is starting to waver.

Physicians, like other highly educated workers, are watching what's happening in another once-secure, intellectually demanding profession: computer programming. Not long ago, coding was considered one of the most prestigious and future-proof careers in the modern economy. The brightest students pursued software engineering, drawn by high salaries, strong demand and the appeal of solving complex problems. Programmers were irreplaceable. Until they weren't.

From Amazon to Meta to Salesforce, tech companies are laying off engineers by the thousands. At Microsoft, generative AI already writes a third of the company's code, and some experts predict it could eliminate two-thirds of programming jobs by decade's end. Companies like Shopify and IBM have gone even further, requiring managers to justify hiring humans over AI or freezing new hires for roles they believe GenAI tools will soon replace.

In medicine, large language models already outperform physicians at diagnosing complex cases and answering patient questions. But that doesn't mean clinicians are at risk of losing their jobs. Here are three reasons GenAI won't replace doctors — followed by one major caveat.

1. Too Few Doctors, Too Much Work

Across hospitals and clinics, American healthcare is already stretched beyond capacity. The American Medical Association projects a shortfall of up to 124,000 physicians by 2036, including 48,000 in primary care alone. Three major forces are driving this shortage:

Bottom line: The physician shortage is real and getting worse.
GenAI can help fill the gaps, but it won't eliminate the demand for human clinicians.

2. Cutting Doctors Is A Poor Way To Cut Costs

In most industries, replacing high-salaried workers with technology is the fastest path to profitability. But in healthcare, that approach misses the point entirely. Take primary care as an example. It's the backbone of the U.S. medical system, yet it accounts for less than 5% of the nation's $4.9 trillion in healthcare spending. Only half of that percentage goes to salaries. So, even if we eliminated half of all primary care physicians (an unthinkable move), total costs would drop by just 1.25%.

In healthcare, the greatest opportunity for cost savings is in preventing and better managing chronic diseases like diabetes, hypertension and long-term heart failure. According to the CDC, improving prevention and chronic disease management could prevent 30-50% of their complications (heart attack, stroke, cancer and kidney failure). Avoiding these catastrophic medical events would save an estimated $1.5 trillion annually.

Value-based care models have already demonstrated what's possible. Studies from leading health systems show that investing in proactive, team-based primary care reduces hospitalizations, improves outcomes and lowers annual per-patient costs by up to 23%. That's where the right combination of clinicians and generative AI offers the greatest value. Between visits, GenAI can track symptoms, alert patients to necessary medication changes and identify complications before they turn into crises. Paired with 24/7 telemedicine, GenAI can provide patients with real-time expertise and care for routine concerns, flagging serious problems when doctors aren't normally available.

Bottom line: Controlling chronic disease offers 20 times the savings of cutting primary care jobs.
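The cost arithmetic in this section is easy to verify, treating the article's "less than 5%" as exactly 5% for the sake of the estimate:

```python
total_spending = 4.9e12               # $4.9 trillion U.S. healthcare spending
primary_care = 0.05 * total_spending  # primary care: ~5% of the total
salaries = 0.5 * primary_care         # half of that goes to salaries
savings = 0.5 * salaries              # eliminate half of all primary care physicians
savings_pct = 100 * savings / total_spending
print(f"Total costs would drop by just {savings_pct:.2f}%")  # prints 1.25%
```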
Ultimately, the greatest cost reductions will come from better health, and the best way to achieve that is pairing generative AI with skilled clinicians and empowered patients.

3. AI Can't Replace Human Connection

Generative AI is becoming more skilled with every update. In a recent dual study, ChatGPT provided answers to routine patient questions originally fielded by doctors. When clinicians and patients reviewed the anonymized responses, the AI was rated better than physicians in both quality and empathy. But when the stakes are high — life-altering cancer diagnoses or complex treatment decisions — patients want a trusted human at their side. Abraham Verghese, my colleague at Stanford and a bestselling author, notes: 'Medicine at its heart is a human endeavor … the physician-patient relationship is key; all else follows from it.' He emphasizes the ritual of the physical exam as transformative, a structured encounter that seals trust, communicates care and calms fear. Studies show that the doctor's touch reduces anxiety, boosts patient satisfaction, and even improves clinical outcomes.

Bottom line: Even when data show that generative AI is more accurate, patients still want to talk with a human when facing complex or life-threatening decisions.

A Caution Against Complacency

No matter how powerful generative AI becomes, physicians will still have jobs. But will those jobs be fulfilling or soul-crushing? That depends on what doctors do next. If private equity firms or for-profit health insurers determine how GenAI is integrated into medicine, the technology will be used primarily to increase productivity: faster diagnoses, shorter visits, less support staff. Yes, technology can streamline tasks. But unless clinicians shape its deployment, GenAI will be used primarily to drive productivity, making today's problems worse for both clinicians and patients.
By contrast, if physicians take the lead, they can harness generative AI to improve patient health, reduce burnout and lower costs by preventing complications like heart attacks, strokes, cancer and kidney failure. But that success will require more than technological tools. Doctors must organize into high-performing medical groups, integrate GenAI into all aspects of clinical care and negotiate payment models that reward improved outcomes — not just higher volume.

Bottom line: GenAI can cut corners or improve care, but not both. It can boost profits or improve lives, but not both. The path we take will depend on who takes the lead.


Newsweek
an hour ago
'Always On': How Workers Are Suffering From 'Infinite' Work
Though "Infinite Workday" might sound like the title of a sci-fi film, it's a reality for many Americans, according to a recent report from Microsoft. The tech giant released its 2025 Work Trend Index Annual Report this week, which highlighted the relentless nature of the modern workday. Newsweek spoke to experts to find out more about the "infinite workday" and how it is impacting Americans.

The Context

The phrase "infinite workday" refers to being constantly connected to work, from dawn until late at night. A spokesperson for Microsoft told Newsweek that "The infinite workday perfectly speaks to how we all feel. Work has reached peak inefficiency, and we can't look away."

Composite image of a stressed worker, a clock, a laptop and a note reading, "Back to work." Photo-illustration by Newsweek/Getty/Canva

What To Know

Microsoft reported that the average employee receives 117 work emails and 153 Teams messages each day, has 2 minutes between interruptions (be it a meeting, call or message), and that 57 percent of meetings are called in the moment and do not have a calendar invite. In an email shared with Newsweek, a Microsoft spokesperson said that U.S. workers average 155 chat messages per person per day, just above the global average of 153, and send an average of 120 emails per person per day, again just above the global average. The intensity of the workday comes at a time when workplace satisfaction is increasingly low.
In May 2025, Glassdoor released its Employee Confidence Index and found that only 44 percent of U.S. workers feel optimistic about their company's prospects—the lowest reading ever recorded. Gallup, meanwhile, reported in 2024 that employee engagement was at a 10-year low, with enthusiasm and involvement both dropping sharply. And The State of the Workforce Report from MeQuilibrium, which analyzed findings from 5,477 employees across various industries, found that 35 percent of employees feel worse about their work situation and 49 percent feel worse about their finances.

Why Is Work Stress So Prevalent in America?

Though Microsoft's study is not country-specific, the problem of the infinite workday is a pervasive one for Americans. According to data from the Bureau of Labor Statistics, the average workweek for all employees in private industries, including part-time employees, was 34.5 hours as of 2022. Though the Fair Labor Standards Act sets a standard workweek of 40 hours, for most U.S. workers there is no federal limit on how many hours they can work in a week.

Newsweek spoke to Juliet Schor, author of Four Days a Week: The Life-Changing Solution for Reducing Employee Stress, Improving Well-Being and Working Smarter. "U.S. workers have longer hours than people in other high-income countries," she told Newsweek via email. As for the factors driving this, Schor pointed to a "lack of legal protections to turn off devices, high numbers of companies with outsourced teams so there's a need to work across time zones, weak levels of unionization, long hours culture and high health care costs borne by employers."

Newsweek also spoke to Ellen Ernst Kossek, distinguished professor emerita of management at Purdue University, who said that U.S. culture itself "really emphasizes work" and that "The U.S. identity is linked really heavily to work."
She highlighted the right to request flexible working and right-to-disconnect laws in other countries like the U.K., and said that by comparison the U.S. is more "always on," with an expectation to be online.

Vili Lehdonvirta, professor of technology policy in the Department of Computer Science at Aalto University, Finland, echoed this point. "In many sectors, like technology and finance, there is an expectation that workers should be available to their employers also outside formal working hours, and this norm is probably stronger in the U.S. than in many places in Europe." Lehdonvirta pointed to different technology adaptations and urban planning as playing a potential role in this. He said that mobile devices, and apps like Slack and Microsoft Teams, make "always-on culture easier to enact in practice."

Speaking to Newsweek over email, Stewart Friedman, emeritus practice professor of management at the Wharton School at the University of Pennsylvania, said, "Norms about boundaries between work and the rest of life vary across countries and they are resistant to change." He said that though people in the U.S. work longer than those in Europe, they are "less burdened" by work than people in South Korea or Japan. "The values underlying national or regional cultures play a big role in determining expectations about the parts of life to which we allocate our attention."

How 'Always On' Work Culture Negatively Impacts Employees

We know that workers are indeed always on, but how is this impacting them? For Schor, the risks are clear. "Workers burn out, have health problems and as a result do lower quality work and are more likely to quit," she said. Lehdonvirta told Newsweek, "Studies suggest that workers in an always-on work culture experience more work-home-interference, fatigue, and other negative consequences."
A 2019 study from Myers-Briggs surveyed 1,000 people about always-on culture and found that people who were able to access work calls and emails outside of hours were more engaged in their jobs, but also more stressed. The study found that 28 percent of always-on employees said they couldn't mentally switch off, while 20 percent reported mental exhaustion.

According to Lehdonvirta, the consequences of this vary. "Worker-controlled flexibility over when to carry out duties can even be a positive thing for combining work with other commitments. Organizational culture and the behavior of supervisors as role models matters," he said.

"People do have different styles of working," Kossek said, noting that people may work out of hours to enable taking breaks at other times, in order to help balance work-life responsibilities. "There is a risk to working odd hours," Kossek said, noting that "We can make unhealthy choices," such as checking emails on weekends or vacations when it's not an emergency. Kossek highlighted that workers are also bringing the job home with them. "Think about two hands going back and forth, representing emails and texts crossing borders into home, home into work," she said. There is a "high pattern of integration here," Kossek said, and she likened the amalgamation of work and home life to trying to text while driving.

The Entry of Artificial Intelligence

Microsoft's report comes as the world of work is being rapidly changed by the increasing prevalence of artificial intelligence. AI is a polarizing topic—some liken it to a new industrial revolution, while others are sounding the alarm on ethical and environmental concerns. But how will it impact the workplace? Will this new technology rebalance the rhythm of the working day, or will it hit the gas pedal on an already unsustainable work pace?
A spokesperson for Microsoft told Newsweek, "At a time when nearly every leader is trying to do more with less, we have a real opportunity—not to speed up a broken system, but to refocus on the 20 percent of work that drives 80 percent of the impact, to reorganize into flatter, more agile teams, and to pause long enough to learn how to use AI—not just to support the work, but to transform it."

Schor, though, said that "AI can go either way." "It can lead to job stress, unemployment and higher productivity requirements. But it can also be a way to enhance productivity," she said. Lehdonvirta shared a similar sentiment. "It depends entirely on what they can do," he said, adding that if these tools "genuinely help people" to off-load tasks, then they could help to achieve "sustainable working styles." However, "If they become yet another notification that interrupts you, or yet another inbox that needs to be dealt with, then the consequences may be different." Friedman told Newsweek, "To the extent that AI tools give greater freedom and flexibility in determining how we allocate our attention to the people and projects about which we care the most, then they can be useful in helping us produce greater harmony and impact as leaders in all the different parts of our lives."

What's Next

The workforce is rapidly changing, but more change may need to come to tackle always-on culture. "We have to come up with new norms for managing when we're on and when we're off work, and new ways of communicating," Kossek said. Schor said, "When workloads increase, reducing hours can often make it easier to do all the work," because "people are most rested and less burned out." A good work-life balance is key, but it takes commitment. "People are trying to be great employees, but also have a rich personal life," Kossek said. Friedman told Newsweek that "learning how to manage boundaries between different parts of life," like "work, home, community," is possible.
But "it takes conscious effort and continual experimentation."