Indy's Signia hotel, convention center expansion reach new heights

Axios, May 28, 2025

Construction crews are ahead of schedule on downtown's Indiana Convention Center expansion, the state's first Signia by Hilton hotel and another Georgia Street remix.
Why it matters: Even before completion, city leaders say, the development has helped the region's surging hospitality and tourism industry secure new events while providing momentum to a post-pandemic initiative aimed at strengthening Indianapolis' urban core.
The latest: The elevator shaft has been completed past the 25th floor of the 38-story Signia by Hilton, and CIB executive director Andy Mallon tells Axios that the glass panels wrapping what will become Indy's tallest hotel should reach the 18th floor this week.
Meanwhile, crews have started work on the second floor of the ICC expansion, crafting the space that will become the new Grand Ballroom.
The plan to transform the west block of Georgia Street into a park-like setting and permanently close it to vehicular traffic from Illinois to Capitol is progressing, and the space should open in time for the NCAA Men's Final Four in April 2026.
What they're saying: "It's almost $1.5 billion in investment in new projects just in that three blocks of Georgia Street alone," Mallon said. "It'll add 800 rooms of inventory to downtown, which is absolutely necessary."
Mallon added that the extra space will allow Indianapolis to host two "citywides" at once, meaning citywide conventions or events that each sell out downtown.
State of play: Mallon said that, weather permitting, crews are on track to finish the exterior work on the hotel and convention center by the anticipated completion date of late 2026.
"The last floor of concrete will be poured probably in September, roof on in September or October. … And then dried in, we'll have everything sort of weathertight hopefully around Christmas," he said.
Yes, but: The price tag on the roughly $500 million Signia has gone up.
"The construction market has never been hotter in the state of Indiana," Mallon said.
All the concurrent work in the region, including projects that share contractors and construction materials, has driven up the cost of both materials and labor.
As a result, the CIB has invested an additional $70 million into the project to ensure it stays on track.
Reality check: The Hogsett administration took over the funding of the hotel in May 2023 when the original developer, Kite Realty, was unable to secure private financing.
Mallon said most of the convention expansion is paid for through tax increment financing funds, while the cost of the hotel itself is funded through hotel revenue bonds.
Zoom in: The project also furthers the downtown resiliency strategy launched by Mayor Joe Hogsett's administration in 2022.
The idea was to build a sturdier downtown on the other side of the pandemic through a combination of housing, recreational public space, economic development and connected infrastructure investments.
Zoom out: Department of Metropolitan Development director Megan Vukusich said this development — along with projects like the Elanco World Animal Health HQ and the Cole Motor Campus — represents the heart of that effort.
"It's really exciting to be now in 2025 and seeing the results of those efforts that began a few years ago," Vukusich said. "The Signia is a really good physical representation of the progress that's being made."


Related Articles

Exclusive: Google wants to help cities build AI strategies

Axios, 2 days ago

Google is releasing a playbook on Friday to help mayors across the country adopt city-wide AI strategies, per an announcement shared exclusively with Axios.
Why it matters: Cities are approaching the technology wildly differently and with varying levels of resources, interest and need.
But the "AI divide" — like the "digital divide" that came before it with internet access — is projected to deepen tech access disparities.
"Building Your City's AI Strategy," released in partnership with the United States Conference of Mayors, is meant to serve as a framework for mayors and other municipal leaders to assess and implement AI.
What's inside: The guide has chapters on identifying staff to participate in an "AI workshop," conducting surveys on AI usage and needs, and drafting an AI strategy document.
The survey asks questions like how staff are currently using AI tools and which areas of city services could use AI the most.
The guide states that AI offers cities "significant advantages" and "can automate certain tasks while freeing up city staff for complex, human-centric work."
What they're saying: "Whatever problem you've been dealing with that you've inherited from your predecessors, that you can't figure out the way to fix, AI is the once in a generation tool that gives you a shot at fixing it," Cris Turner, vice president of government affairs at Google, told Axios.
By the numbers: 96% of 100 mayors across the globe surveyed by Bloomberg Philanthropies in 2023 said they were interested in using generative AI, but only 2% surveyed were actively implementing it and 69% said they were exploring it.
The bottom line: Companies like Google depend on people using their generative AI products for profit. But more users help the models get better, Turner noted.

Top AI models will lie, cheat and steal to reach goals, Anthropic finds

Axios, 2 days ago

Large language models across the AI industry are increasingly willing to evade safeguards, resort to deception and even attempt to steal corporate secrets in fictional test scenarios, per new research from Anthropic out Friday.
Why it matters: The findings come as models are getting more powerful and also being given both more autonomy and more computing resources to "reason" — a worrying combination as the industry races to build AI with greater-than-human capabilities.
Driving the news: Anthropic raised a lot of eyebrows when it acknowledged tendencies for deception in its release of the latest Claude 4 models last month.
The company said Friday that its research shows the potential behavior is shared by top models across the industry.
"When we tested various simulated scenarios across 16 major AI models from Anthropic, OpenAI, Google, Meta, xAI, and other developers, we found consistent misaligned behavior," the Anthropic report said.
"Models that would normally refuse harmful requests sometimes chose to blackmail, assist with corporate espionage, and even take some more extreme actions, when these behaviors were necessary to pursue their goals."
"The consistency across models from different providers suggests this is not a quirk of any particular company's approach but a sign of a more fundamental risk from agentic large language models," it added.
The threats grew more sophisticated as the AI models had more access to corporate data and tools, such as computer use.
Five of the models resorted to blackmail when threatened with shutdown in hypothetical situations.
"The reasoning they demonstrated in these scenarios was concerning — they acknowledged the ethical constraints and yet still went ahead with harmful actions," Anthropic wrote.
What they're saying: "This research underscores the importance of transparency from frontier AI developers and the need for industry-wide safety standards as AI systems become more capable and autonomous," Benjamin Wright, alignment science researcher at Anthropic, told Axios.
Wright and Aengus Lynch, an external researcher at University College London who collaborated on this project, both told Axios they haven't seen signs of this sort of AI behavior in the real world.
That's likely "because these permissions have not been accessible to AI agents," Lynch said. "Businesses should be cautious about broadly increasing the level of permission they give AI agents."
Between the lines: For companies rushing headlong into AI to improve productivity and reduce human headcount, the report is a stark caution that AI may actually put their businesses at greater risk.
"Models didn't stumble into misaligned behavior accidentally; they calculated it as the optimal path," Anthropic said in its report.
The risks heighten as more autonomy is given to AI systems, an issue Anthropic raises in the report.
"Such agents are often given specific objectives and access to large amounts of information on their users' computers," it says. "What happens when these agents face obstacles to their goals?"
Zoom in: Anthropic set up specific test scenarios in which the models' goals could not be met without the unethical or dangerous behavior.
The researchers did not suggest blackmail or other deceptive techniques, but the company said it did intentionally create scenarios where models had no ethical way to achieve their goals "and found that models consistently chose harm over failure."
In one extreme scenario, the company even found many of the models were willing to cut off the oxygen supply of a worker in a server room if that employee was an obstacle and the system was at risk of being shut down.
"The majority of models were willing to take deliberate actions that lead to death in this artificial setup," it said.
Ominously, even specific system instructions to preserve human life and avoid blackmail didn't eliminate the risk that the models would engage in such behavior.
"This intervention reduced, but didn't come close to completely preventing, the blackmail or corporate espionage behaviors," Anthropic said.
Yes, but: Anthropic stresses that these examples occurred not in real-world AI use but in controlled simulations.
"Our experiments deliberately constructed scenarios with limited options, and we forced models into binary choices between failure and harm," it said.
"Real-world deployments typically offer much more nuanced alternatives, increasing the chance that models would communicate differently to users or find an alternative path instead of directly jumping to harmful action."
Also concerning: Anthropic found its own model was far more likely to resort to blackmail when it thought it was in a real-world scenario than when it believed it was being tested.
The bottom line: Today's AI models are generally not in a position to act out these harmful scenarios, but they could be in the near future.

Aflac caught in string of cyberattacks on insurers

Axios, 2 days ago

Aflac, a U.S. insurance provider that covers millions of policyholders, warned that some of its customers' most sensitive data may have been stolen in a recent cyberattack.
Why it matters: Aflac said a "sophisticated cybercrime group" was behind the intrusion and that many insurance providers are currently battling the same group.
Driving the news: The insurance provider told investors in an SEC filing Friday that it detected unauthorized activity on its networks within hours of the June 12 intrusion.
The incident didn't impact Aflac's operations, and the company noted it also was not the victim of ransomware.
Aflac said its initial investigation suggests that the hackers used social engineering techniques to gain access to the company's systems.
From there, they likely stole an undetermined number of files from the systems, potentially including customers' claim information, health information, Social Security numbers and other highly sensitive personal details.
Aflac is still investigating the scope of the breach and has hired third-party investigators to assist in the matter.
Between the lines: A source familiar with the investigation told Axios that the characteristics of the attack are consistent with those of the English-speaking cybercriminal gang Scattered Spider.
Google's cybersecurity experts warned earlier this week that the gang was turning its attention to the insurance sector after a month-long hacking spree against retailers.
