AI Ecosystem

Inside Smack Technologies: The Startup Building AI Models Designed for the Battlefield

⚡ Quick Summary

  • Smack Technologies raised $32 million to build battlefield-specific AI models using reinforcement learning
  • Founded by ex-Marines and a former Tinder VP, the startup bridges military expertise with Silicon Valley tech
  • Company claims its models will surpass Claude in military operational planning capabilities
  • The funding signals growing investor comfort with dedicated defence AI despite ongoing ethical debates


While major AI companies wrestle with the ethical boundaries of military applications, a well-funded startup led by former US Marines is charging ahead with purpose-built combat planning models, and it just secured $32 million to accelerate its mission.

What Happened

Smack Technologies, a defence-focused artificial intelligence startup, announced a $32 million funding round this week, signalling significant investor confidence in dedicated military AI systems. The company, founded by former Marine Forces Special Operations Commander Andy Markoff alongside fellow ex-Marine Clint Alanis and former Tinder VP of Technology Dan Gould, is developing AI models specifically trained to plan and execute military operations.


Smack's approach differs fundamentally from the general-purpose large language models produced by companies such as OpenAI and Anthropic. The company trains its models through a reinforcement learning process reminiscent of DeepMind's AlphaGo, running them through simulated war game scenarios where expert military analysts provide feedback on whether chosen strategies would succeed in real-world conditions. Markoff claims the resulting models will soon surpass Claude's capabilities when applied to operational military planning, a bold assertion that underscores the growing divergence between civilian and defence AI development.

The announcement comes at a particularly charged moment. Anthropic recently drew attention for expressing reservations about providing unfettered military access to its models, highlighting a fundamental tension in the AI industry between commercial opportunity and ethical guardrails. Smack Technologies appears far less conflicted, with Markoff arguing that ethical deployment responsibility belongs squarely with uniformed military personnel, not technology companies.

Background and Context

The relationship between Silicon Valley and the Pentagon has been fraught for years. Google famously declined to renew its Project Maven contract in 2018 after employee protests over using AI for drone targeting. Since then, the landscape has shifted considerably, with more technology companies warming to defence contracts amid geopolitical tensions and government investment programs. The US Department of Defense has steadily increased its AI spending, and initiatives like the Replicator programme aim to field autonomous systems at scale.

Smack Technologies fits into a growing ecosystem of dedicated defence AI startups, joining companies like Anduril, Shield AI, and Palantir in building technology explicitly designed for military use. What distinguishes Smack is its training methodology: using reinforcement learning with human expert feedback rather than simply fine-tuning existing commercial models. This approach mirrors techniques that proved revolutionary in game-playing AI and suggests a path toward systems that can evaluate complex tactical scenarios with nuance that general-purpose models may lack.
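The expert-in-the-loop training loop described above can be sketched in miniature. Everything below — the strategy names, the scoring function, the bandit-style update — is purely illustrative and bears no relation to Smack's actual system; it simply shows how repeated expert feedback can steer a policy toward preferred choices:

```python
import random

# Toy sketch of expert-in-the-loop reinforcement learning.
# All names and values here are hypothetical illustrations.

STRATEGIES = ["flank", "hold", "advance"]

def expert_feedback(strategy: str) -> float:
    """Stand-in for a human analyst scoring a simulated outcome.
    Here 'flank' is arbitrarily treated as the winning play."""
    return 1.0 if strategy == "flank" else 0.0

def train(episodes: int = 2000, lr: float = 0.1, seed: int = 0) -> dict:
    rng = random.Random(seed)
    # Value estimate per strategy, updated from expert scores.
    values = {s: 0.0 for s in STRATEGIES}
    for _ in range(episodes):
        # Epsilon-greedy: mostly exploit the current best, sometimes explore.
        if rng.random() < 0.2:
            choice = rng.choice(STRATEGIES)
        else:
            choice = max(values, key=values.get)
        reward = expert_feedback(choice)
        # Incremental update toward the observed expert score.
        values[choice] += lr * (reward - values[choice])
    return values

if __name__ == "__main__":
    learned = train()
    print(max(learned, key=learned.get))  # prints "flank"
```

The point of the sketch is the feedback channel: the "reward" comes from a human judgment about a simulated outcome, not from a fixed game score, which is what distinguishes this style of training from pure self-play.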

The company's leadership profile is also notable. Having a CEO who personally planned and executed special operations brings domain expertise that pure technologists cannot replicate, potentially giving Smack's models a practical edge in understanding the messy realities of battlefield decision-making.

Why This Matters

This story matters because it crystallises a defining question for the AI industry in 2026: who gets to decide how artificial intelligence is used in warfare? The debate is no longer theoretical. With $32 million in fresh capital, Smack Technologies represents a concrete answer from one faction: build dedicated military AI and let the armed forces govern its ethical deployment.

The implications extend far beyond a single startup. If purpose-built military AI models prove more capable than adapted civilian systems, it could create a permanent bifurcation in the AI landscape. Defence-specific models trained on classified scenarios and validated by operational experts would occupy a space that general-purpose AI companies simply cannot access, regardless of their technical sophistication. This dynamic could reshape how governments procure AI technology and which companies ultimately dominate the defence AI market.

For the broader technology industry, Smack's success in fundraising signals that investor appetite for defence AI remains robust despite ongoing ethical debates. The venture capital community is increasingly comfortable backing companies that build explicitly for military applications, suggesting the 2018-era stigma around Pentagon contracts has largely dissipated.

Industry Impact

The $32 million raise positions Smack Technologies as a serious contender in the defence AI space, but its ripple effects will be felt across multiple sectors. Major AI companies like Anthropic, OpenAI, and Google DeepMind will face renewed pressure to articulate clear policies on military use: either embracing defence applications or ceding that market to purpose-built competitors.

For the defence industry itself, Smack's reinforcement learning approach could set a new standard for how military AI systems are trained and validated. Traditional defence contractors like Lockheed Martin, Raytheon, and Northrop Grumman have been building their own AI capabilities, but a startup with a fundamentally different training methodology could leapfrog legacy approaches. The emphasis on expert-in-the-loop training, where military analysts actively shape model behaviour through war game feedback, addresses one of the core concerns about autonomous military systems: ensuring human judgment remains central to operational planning.

The geopolitical implications are equally significant. China has been investing heavily in military AI, and Russia has stated ambitions for autonomous combat systems. The emergence of well-funded US startups building dedicated military AI suggests an accelerating arms race in artificial intelligence, one where the distinction between offensive and defensive capabilities may become increasingly blurred.

Expert Perspective

CEO Andy Markoff's framing of the ethical question, that military personnel who swear oaths should bear responsibility for ethical AI deployment rather than technology companies, represents a philosophically distinct position in the debate. This view resonates with the defence establishment but sits uncomfortably with many AI researchers who argue that building inherently dangerous systems creates moral obligations for their creators, not just their users.

The comparison to AlphaGo's training methodology is instructive. AlphaGo succeeded by playing millions of games against itself and learning from expert feedback, eventually surpassing human capabilities in a domain previously thought to require uniquely human intuition. If Smack can replicate this approach for operational military planning, the implications for command-and-control structures could be transformative, though the consequences of errors in this domain are measured in lives rather than lost games.

What This Means for Businesses

For businesses in the defence supply chain, Smack's emergence signals that dedicated military AI platforms will increasingly complement and potentially compete with adapted civilian tools. Companies providing IT infrastructure and software to defence organisations should anticipate growing demand for AI-compatible systems, secure computing environments, and platforms capable of handling classified model training.

Beyond defence, the bifurcation between civilian and military AI could influence how companies across sectors weigh purpose-built against general-purpose AI solutions, with the defence sector likely to favour bespoke AI built specifically for operational requirements over adapted generic tools.

Looking Ahead

Smack Technologies' trajectory will serve as a bellwether for the entire defence AI sector. If its reinforcement learning approach delivers models that genuinely outperform adapted civilian AI in military planning scenarios, expect a wave of similar purpose-built defence AI startups to attract significant funding. The company's next major milestone will likely be demonstrating its models in formal military evaluations, where performance against real-world scenarios will determine whether the reinforcement learning approach can deliver on its ambitious promise. The broader question, whether AI should be designed specifically for warfare, will continue to divide the technology industry, but with each funding round, the practical answer becomes clearer.

Frequently Asked Questions

What is Smack Technologies building?

Smack Technologies is developing AI models specifically trained to plan and execute military operations, using reinforcement learning with expert military analyst feedback rather than adapting general-purpose AI systems.

How does Smack's AI training differ from other AI companies?

Unlike companies that fine-tune general models, Smack uses a reinforcement learning approach similar to AlphaGo, running models through war game simulations with military expert feedback to teach optimal operational planning.

Why is military AI controversial in Silicon Valley?

The debate centres on whether technology companies bear moral responsibility for building AI designed for warfare, with some companies like Anthropic expressing reservations about unfettered military access while startups like Smack argue ethical deployment should be managed by military personnel.

Tags: military AI, Smack Technologies, defence technology, artificial intelligence, national security
OfficeandWin Tech Desk
Covering enterprise software, AI, cybersecurity, and productivity technology. Independent analysis for IT professionals and technology enthusiasts.