The U.S. Secretary of Defense must personally negotiate with a private company to get its product to agree to serve in war. Meanwhile, German public media fabricates reality, and sensors in the Reichstag trigger an alarm over floor cleaner.

Office-Style Rebellion of the Machines. The world's most powerful military has found itself in a state of unprecedented dependency. Secretary of Defense Pete Hegseth summoned Dario Amodei, CEO of Anthropic, to the Pentagon—not to issue an order, but to negotiate terms of software use. The dispute concerns the Claude model, which—as reported by Axios—is the only AI system operating within the U.S. military's classified networks. The Department of Defense has hit a wall in the form of "guardrails"—ethical blocks hardcoded into the system by its creators.

This situation exposes a new power dynamic in digital geopolitics. A company founded in 2021 by former OpenAI employees is dictating terms to a superpower. The Pentagon, frustrated by the resistance of a pacifist algorithm, is threatening to designate Anthropic as a supply chain risk. This is a category usually reserved for Chinese tech giants, not Silicon Valley partners. The threat is an act of desperation; the military needs Claude for intelligence tasks, but the tool refuses to cooperate to the extent the generals expect.

In the background of this conflict, Palantir Technologies is trying to forge an alliance that would allow the model to be "operationalized" within defense systems. Tuesday's meeting at the Pentagon is a test of strength: can the state force private technology to abandon its moral safety switches in favor of combat effectiveness? Hegseth and Amodei enter the negotiating room from fundamentally opposing positions. The outcome of this conversation will define whether, in future conflicts, AI will be a soldier or the commander's conscience.

"Claude is the only AI model available in the military's classified systems, and the most capable model for sensitive defense and intelligence work." — Axios

Relations between Silicon Valley and the Pentagon have been strained since 2018's "Project Maven," when Google employees protested the use of their AI for drone imagery analysis. Anthropic, positioning itself as a "safe" alternative to OpenAI, has written ethical constraints into its product's DNA, and those constraints are now becoming an operational barrier for the defense sector.

Fiction in the News Service. While the military fights to make AI work, the media is fighting the consequences of AI working too creatively. The German broadcaster ZDF has apologized for airing AI-generated footage in "heute journal" on February 15, 2026. Viewers watching a report on the American ICE agency saw not reality, but a digital hallucination. The editorial team used generated video without any labeling, a lapse that deputy editor-in-chief Anne Gellinek called a "double mistake."

This incident undermines the credibility of public media at a time when it is most needed. Nathanael Liminski, Media Minister of North Rhine-Westphalia, rightly noted that trust is a station's most valuable capital. Meanwhile, editors, instead of verifying facts, reached for easily accessible, synthetic images. This is not a technical error; it is a cognitive one. News journalism that replaces gritty reality with clean graphics from a generator ceases to function as a witness.

The paradox is that in both cases—the Pentagon and ZDF—the technology acted contrary to its users' intentions. In the U.S., the algorithm is too moral to kill. In Germany, the algorithm is too eager to confabulate to inform. In both scenarios, the human factor failed in supervising a tool that was supposed to streamline work but instead became a source of reputational and operational crisis.

A Cold Shower of Reality. While institutions grapple with virtual problems, physical reality is brutally verifying human hubris. In the Austrian Alps, winter killed five people within 48 hours. In Tyrol alone, emergency services recorded over 30 avalanche interventions. In St. Anton am Arlberg, citizens of the USA, Poland, and Austria perished. No predictive algorithms stopped the masses of snow from descending on the skiers.

Technological infrastructure proved helpless against 20 centimeters of snow at Vienna-Schwechat airport. 150 flights were canceled, grounding 13,000 passengers. In Slovenia, 34,000 households were cut off from power. This is a reminder of the hierarchy of needs: before we start worrying about the ethics of artificial intelligence, we must ensure the power grid functions and roads remain passable.

This contrasts with events in Berlin, where ultra-modern security systems in the Reichstag caused panic over a cleaning agent. At 5:15 AM, 80 firefighters rushed into action to neutralize a threat that turned out to be detergent fumes. Thus we have systems that react with hysteria to cleaning, and nature that paralyzes entire nations despite the existence of snowplows and weather radars.

150 — the number of flights canceled in Vienna due to snowfall

The argument from proponents of full automation is that errors are inherent to progress. They claim the Pentagon must force Anthropic into submission to maintain an edge over China, and that the ZDF blunder is just a transitional phase in adapting to new tools. They point out that without advanced detection systems in the Reichstag, a real threat could go overlooked.

However, the facts of the last two days contradict this optimistic vision. Dependence on systems we neither understand nor control creates new vectors of risk. If Claude is the only model in the U.S. military's classified systems, then American defense has a "single point of failure" dependent on the whim of a civilian board. If public media cannot distinguish truth from falsehood in their own materials, they become unwitting agents of disinformation. Technology, instead of strengthening institutions, begins to corrode them from within.

The meeting at the Pentagon may end in a forced settlement, but the problem will remain. We are building systems based on trust in code that increasingly proves incompatible with reality—whether on the battlefield, in a TV studio, or on a snowy slope in Tyrol. The real danger lies not in machines taking power, but in us surrendering it voluntarily before they learn to distinguish floor cleaner from chemical weapons.