How a Startup Plans to Stop Lightning, and OpenAI’s Pentagon Deal

Summary
– Startup Skyward Wildfire aims to prevent catastrophic wildfires by stopping lightning strikes, using a method that appears to involve cloud seeding with metallic chaff.
– The company has raised millions to accelerate development, but researchers highlight significant uncertainties about the method’s effectiveness and environmental impact.
– OpenAI has reached an agreement allowing the US military to use its technologies in classified settings, a deal CEO Sam Altman described as rushed.
– OpenAI asserts the deal includes safeguards against uses like autonomous weapons and mass surveillance, differing from terms another company refused.
– It remains unclear if OpenAI can enforce these safety precautions or if the agreement will satisfy employees who wanted a stronger ethical stance.
A new startup has entered the arena of wildfire prevention with a bold claim: it can stop the lightning strikes that often spark catastrophic blazes. Skyward Wildfire recently secured millions in funding to advance its product and scale operations, though the exact mechanics of its technology remain undisclosed to the public. Available documents point toward a method with historical roots: seeding clouds with metallic chaff, specifically narrow fiberglass strands coated in aluminum. The U.S. government first assessed this approach as far back as the early 1960s. While the potential to prevent fire ignitions is significant, experts caution that numerous questions persist. The method's effectiveness across diverse atmospheric conditions, the volume and frequency of material that would need to be dispersed, and the possible secondary environmental impacts all require thorough investigation before widespread deployment can be considered.
In a separate development within the tech sector, OpenAI has finalized an agreement permitting the U.S. military to utilize its artificial intelligence tools in classified environments. Company CEO Sam Altman characterized the negotiations as accelerated, noting they gained momentum only after the Pentagon publicly criticized another AI firm, Anthropic, for its reluctance to engage. OpenAI has been emphatic in stating this is not a blanket authorization for military applications. In a detailed blog post, the company outlined specific prohibitions, including uses for developing autonomous weaponry or enabling mass domestic surveillance. Altman stressed that OpenAI did not merely accept the terms that Anthropic had previously rejected.
Nevertheless, significant challenges loom. Analysts question whether OpenAI can effectively enforce the safety measures it has pledged, especially as the military pursues a rapid, politically charged AI integration strategy amid ongoing international tensions. Additionally, the deal's reception among the company's own workforce remains uncertain, as some employees had advocated for a firmer stance against military collaboration. Navigating these competing pressures will require careful and deliberate effort from the company's leadership.
(Source: Technology Review)
