The Ineducable Meets the Inevitable

It’s really rather remarkable that they genuinely didn’t see this coming:

I’m about as far left as you can get… but we do have problems with MAiD in Canada. How do I know? It was “offered” to me in lieu of care. I’m disabled, I was alone, my conditions expensive.

Yes I was allowed to say “No”, but no alternative care was offered. That’s coercion.

If you’re dumb enough to support both a) centralized government health care and b) government-sponsored euthanasia, you deserve exactly what you’re going to get.

It’s not going to be long before people like her aren’t allowed to say no.

As the SG poster rather memorably put it, when you vote for the leopard face-eating party, you really shouldn’t be too surprised when the leopards for whom you voted start eating faces.

DISCUSS ON SG


Veriphysics: The Treatise 021

IV. The Collapse of Materialism in Physics

The Enlightenment’s metaphysics was materialist at its core. The universe was matter in motion, governed by deterministic laws, fully explicable in principle by the methods of physics. Mind was either reducible to matter or an epiphenomenal shadow cast by material processes. Purpose, meaning, and value were projections onto a universe that contained none of them intrinsically. The goal of science was to complete the mechanical picture, to fill in the remaining gaps, to achieve the God’s-eye view that would render everything transparent to human understanding.

The twentieth century destroyed this picture from within. The destruction came not from theology or philosophy but from physics itself, from the very science that was supposed to complete the materialist vision.

Quantum mechanics revealed that the foundations of matter are not mechanical. At the subatomic level, particles do not have definite positions and momenta until measured; they exist in superpositions of states, described by probability amplitudes rather than determinate values. The Heisenberg uncertainty principle is not merely a limitation on our knowledge; it is a feature of reality itself. The universe, at its most fundamental level, is not a clockwork. It is something stranger, less determinate, more resistant to complete specification than the Enlightenment ever imagined.

Niels Bohr’s Copenhagen interpretation forced an even more troubling conclusion: the act of observation affects what is observed. The measurement problem—the question of how and why quantum superpositions collapse into definite states when measured—remains unsolved after a century of effort. Consciousness cannot be eliminated from the foundations of physics. The materialist program aimed to explain mind in terms of matter; quantum mechanics suggested that matter, at the deepest level, cannot be fully described without reference to mind. The observer is not a passive recorder of an independently existing reality; the observer is implicated in the constitution of what is observed.

Cosmology delivered further blows. The confident materialism that claimed to explain everything has discovered that it cannot account for most of what exists. Approximately ninety-five percent of the universe consists of “dark matter” and “dark energy,” which are simply names for our ignorance, placeholders for phenomena we can detect only by their gravitational effects but cannot observe, explain, or integrate into our existing theories. The visible universe, everything we can see, touch, measure, and analyze, is merely a thin film on an ocean of darkness. The Enlightenment promised illumination; physics has discovered that we inhabit a cosmos mostly opaque to our inquiry.

The multiverse hypothesis represents the final confession of materialist bankruptcy. Confronted with the fine-tuning of physical constants and the fact that the parameters of our universe appear exquisitely calibrated to permit the existence of complex structures, life, and consciousness, materialists found themselves facing a dilemma. The fine-tuning seemed to point toward purpose, design, intention. To avoid this conclusion, some physicists proposed that our universe is one of infinitely many, each with different constants, and we naturally find ourselves in one compatible with our existence. The “multiverse” explains everything and therefore nothing. It is unfalsifiable by design and no observation could ever confirm or refute it. It posits more unobservable entities than observable ones. It is not science but metaphysics, and bad metaphysics at that: an ad hoc construction designed to avoid the obvious implication of the evidence.

The obvious implication is what the Christian tradition always maintained: material reality is not self-sufficient. The visible depends on the invisible. The natural participates in the supernatural. Creation reflects Creator. The mechanical universe was a brief hallucination, sustained for three centuries by the momentum of technological success and the institutional capture of intellectual life. The mysterious universe, saturated with indeterminacy, opaque to final explanation, pointing beyond itself to what transcends it, is what we actually inhabit.

This is not a “God of the gaps” argument, inserting divinity wherever science has not yet reached. It is the exact opposite: the recognition that the gaps are not temporary deficiencies to be filled by future research but the structural features of creaturely knowledge. We see as though through a glass, darkly, not because the glass could be replaced by something clearer, but because we are creatures and not Creator. The darkness is not a problem to be solved but a condition to be acknowledged. Humility about our limits is not skepticism; it is the precondition of genuine knowledge.

You can now buy the complete Veriphysics: The Treatise at Amazon in both Kindle and audiobook formats if you’d like to read ahead or have it available as a reference. 

DISCUSS ON SG


2 Billion Generations of Nothing

It was just remarkable, with this evolutionary distance, that we should see such coherence in gene expression patterns. I was surprised how well everything lined up.

—Dr. Robert Waterston, co-senior author, Science (2025)

If one wanted to design an experiment to give natural selection the best possible chance of demonstrating its creative power, it would be hard to improve on the nematode worm.

Caenorhabditis elegans is about a millimeter long and consists of roughly 550 cells. It has a generation time of approximately 3.5 days. It produces hundreds of offspring per individual. Its populations are enormous. Its genome is compact—about 20,000 genes, comparable in number to ours but without the vast regulatory architecture that slows everything down in mammals. The worms experience significant selective pressure: most offspring die before reproducing, which means natural selection has plenty of raw material to work with. And critically, worms have essentially no generation overlap. When a new generation hatches, the old generation is dead or dying. Every generation represents a complete turnover of the gene pool. There is no drag, no cohort coexistence, no grandparents competing with grandchildren for resources.

In the notation of the Bio-Cycle Fixation Model, the selective turnover coefficient for C. elegans is approximately d = 1.0. Compare that to humans, where we have shown d ≈ 0.45. The worm is running the evolutionary engine at full throttle. No brakes, no friction, no generational overlap gumming up the works.

Now consider the timescale. C. elegans and its sister species C. briggsae diverged from a common ancestor approximately 20 million years ago. At 3.5 days per generation, that is roughly two billion generations. To put that in perspective, the entire history of the human lineage since the putative chimp-human divergence—six to seven million years at 29 years per generation—amounts to something like 220,000 generations. The worms have had nearly ten thousand times as many generations to diverge. Ten thousand times.
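The generation arithmetic above is easy to verify. A quick sanity check in Python, using only the figures quoted in the text (the 365.25 days-per-year constant and the 6.5-million-year midpoint are my assumptions):

```python
# Back-of-the-envelope check of the generation counts quoted above.
DAYS_PER_YEAR = 365.25

# C. elegans / C. briggsae: ~20 million years at ~3.5 days per generation
worm_generations = 20_000_000 * DAYS_PER_YEAR / 3.5   # ~2.1 billion

# Human lineage: ~6.5 million years at ~29 years per generation
human_generations = 6_500_000 / 29                    # ~220,000

ratio = worm_generations / human_generations          # nearly 10,000x

print(f"worm generations:  {worm_generations:.2e}")
print(f"human generations: {human_generations:.2e}")
print(f"ratio:             {ratio:,.0f}x")
```

The numbers land where the text says they do: roughly two billion worm generations against roughly 220,000 human generations, a ratio of nearly ten thousand to one.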

Two billion generations, running the evolutionary engine at maximum speed, with enormous populations, high fecundity, complete generational turnover, and all the raw material that natural selection could ask for. If there were ever a case where the neo-Darwinian mechanism should produce spectacular results, this is it.

So what did it produce? Nothing.

In June 2025, a team led by Christopher Large and co-senior authors Robert Waterston, Junhyong Kim, and John Isaac Murray published a landmark study in Science comparing gene expression patterns in every cell type of C. elegans and C. briggsae throughout embryonic development. Using single-cell RNA sequencing, they tracked messenger RNA levels in individual cells from the 28-cell stage through to the formation of all major cell types—a process that takes about 12 hours in these organisms.

What they found is what Dr. Waterston described, with evident surprise, as “remarkable coherence.” Despite 20 million years and two billion generations of evolution, the two species retain nearly identical body plans with an almost one-to-one correspondence between cell types. The developmental program—when and where each gene turns on and off as the embryo develops—has been conserved to a degree that startled even the researchers.

Gene expression patterns in cells performing basic functions like muscle contraction and digestion were essentially unchanged between the two species. The regulatory choreography that builds a worm from a fertilized egg—which genes activate in which cells at which times—was so similar across 20 million years that the researchers could map one species’ cells directly onto the other’s.

Where divergence did occur, it was concentrated in specialized cell types involved in sensing and responding to the environment. Neuronal genes, the researchers noted, “seem to diverge more rapidly—perhaps because changes were needed to adapt to new environments.” But even this divergence was modest enough that Kim, one of the co-senior authors, noted the most surprising finding was not that some expression was conserved—the body plans are obviously similar, so that’s expected—but that “when there were changes, those changes appeared to have no effect on the body plan.”

Read that again. The changes that the mechanism did produce over two billion generations had no detectable effect on how the organism is built. The divergence was, as far as the researchers could determine, functionally trivial.

Murray, the study’s third senior author, offered the most revealing comment of all: “It’s hard to say whether any of the differences we observed were due to evolutionary adaptation or simply the result of genetic drift, where changes happen randomly.”

After two billion generations, the researchers cannot confidently identify a single adaptive change in gene expression. They cannot point to one cell type, one gene, one regulatory switch and say: natural selection did this. Everything they found is equally consistent with random noise.

Now, the standard response to findings like this is to invoke purifying selection, also known as stabilizing selection. The argument goes like this: most mutations are deleterious, so natural selection acts primarily to remove harmful changes rather than to accumulate beneficial ones. Gene expression patterns are conserved because any change to a broadly-expressed gene would disrupt too many downstream processes. The machinery is locked down precisely because it works, and selection fiercely punishes any attempt to modify it.

This is true. Purifying selection is real, well-documented, and no one disputes it. But invoking it as an explanation only deepens the problem for the neo-Darwinian account of speciation.

The theory of evolution by natural selection claims that the same mechanism, random mutation filtered by selection, both preserves existing adaptations and creates new ones. The worm data shows empirically what the constraint looks like. The vast majority of the genome is locked down. Expression patterns involving basic cellular functions are untouchable. The only genes free to diverge are those expressed in a few specialized cell types, and even those changes are so subtle that the researchers can’t distinguish them from genetic drift.

This is the genome’s evolvable fraction, and it is small. The regulatory architecture that controls development, the transcription factor binding sites, the enhancer networks, the chromatin structure that determines which genes are accessible in which cells, is so deeply entrenched that two billion generations of nematode reproduction cannot budge it.

And here’s the question no one asked: how did that regulatory architecture get there in the first place?

If the current architecture is so tightly constrained that it resists modification across two billion generations, then building it in the first place required an even more extraordinary series of changes. Every transcription factor binding site had to be fixed. Every enhancer had to be positioned. Every element of the chromatin landscape that determines which genes are expressed in which cell types had to be established through sequential substitutions. This is what we call the shadow accounting problem. The very architecture now being invoked to explain why the worm hasn’t changed is itself a product that requires explanation under the same model. The escape hatch invokes a mechanism whose existence demands an even larger prior expenditure of the same mechanism—an expenditure that the breeding reality principle tells us was itself problematic.

Let us be precise about the scale of the failure. The MITTENS analysis, as published in Probability Zero, establishes that the neo-Darwinian mechanism of natural selection faces multi-order-of-magnitude shortfalls when asked to account for the fixed genetic differences between closely related species. The worm study provides an independent empirical check on this conclusion from the opposite direction.

Instead of asking “can the mechanism produce the required divergence in the available time?” and discovering that it cannot, the worm study asks “what does the mechanism actually produce when given enormous amounts of time under ideal conditions?” and discovers that the answer is exactly what MITTENS proves: essentially nothing.

Two billion generations with every parameter set to maximize the rate of adaptive change, with short generation times, high fecundity, large populations, complete generational turnover, and a compact genome, nevertheless produced two organisms so similar that researchers can map their cells one-to-one. The divergence that did occur was concentrated in a few specialized cell types and could not be confidently attributed to adaptation.

Now scale this down to the conditions that supposedly produced speciation in large mammals. A large mammal has a generation time of 10 to 20 years. Its fecundity is low, with a few offspring per lifetime instead of hundreds. Its effective population size is small. Its generation overlap is substantial (d ≈ 0.45, meaning that less than half the gene pool turns over per generation). Its genome is vastly larger and more complex, with regulatory architecture orders of magnitude more elaborate than a nematode’s.

The number of generations available for speciation in large mammals is measured in the low hundreds of thousands. The worms had two billion and produced nothing visible. On what basis should we believe that a mechanism running at a fraction of the speed, with a fraction of the population size, a fraction of the fecundity, a fraction of the generational turnover, and orders of magnitude more regulatory complexity to navigate, can accomplish what the worms could not?

The question answers itself.

“The worms are under strong stabilizing selection. Other lineages face different selective pressures that drive divergence.”

No one disputes that stabilizing selection explains the stasis. The problem is what happens when you look at the fraction that isn’t stabilized. Two billion generations of mutation, selection, and drift operating on the unconstrained portion of the genome produced changes that (a) affected only specialized cell types, (b) didn’t alter the body plan, and (c) couldn’t be distinguished from drift. If the creative power of natural selection operating on the evolvable fraction of the genome is this feeble under ideal conditions, it does not become more powerful when you make conditions worse.

“Worms are simple organisms. Complex organisms have more regulatory flexibility.”

This gets the argument backward. Greater complexity means more regulatory interdependence, which means more constraint, not less. A change to a broadly-expressed gene in an organism with 200 cell types is more dangerous than a change to a broadly-expressed gene in an organism with 30 cell types, because there are more downstream processes to disrupt. The more complex the organism, the smaller the evolvable fraction of the genome becomes relative to the locked-down fraction.

“Twenty million years is a short time in evolutionary terms.”

It is 20 million years in clock time but two billion generations in evolutionary time. The relevant metric for evolution is not years but generations, because selection operates once per generation. Two billion generations for a nematode is equivalent, in terms of opportunities for selection to act, to 58 billion years of human evolution at 29 years per generation. That’s more than four times the age of the universe. If the mechanism can’t produce meaningful divergence in the equivalent of four universe-lifetimes, the mechanism obviously doesn’t function at all.
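The universe-lifetime comparison is the same arithmetic run in reverse; a minimal sketch, using the figures quoted above and the standard 13.8-billion-year age estimate:

```python
# The "four universe-lifetimes" comparison in generational terms.
worm_generations = 2_000_000_000   # figure from the text
human_gen_years = 29               # years per human generation
universe_age_years = 13.8e9        # standard estimate

equivalent_years = worm_generations * human_gen_years   # 58 billion years
lifetimes = equivalent_years / universe_age_years       # more than four

print(f"{equivalent_years:.1e} years = {lifetimes:.1f} universe lifetimes")
```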

“The study only looked at gene expression, not genetic sequence. There could be extensive sequence divergence not reflected in expression.”

There is sequence divergence, and it’s well-documented. C. elegans and C. briggsae differ at roughly 60-80% of synonymous sites and show substantial divergence at non-synonymous sites as well. The point is that this sequence divergence has not produced meaningful functional divergence. The genes have changed, but what they do and when they do it has remained largely the same. Sequence divergence without functional divergence is exactly what you’d expect from neutral drift operating on a tightly constrained system—and it is exactly the opposite of what you’d expect if natural selection were the creative engine the theory claims it to be.

The Science study is good science. The researchers accomplished something genuinely unprecedented: a cell-by-cell comparison of gene expression between two species across the entire course of embryonic development. The technical accomplishment is significant, and the evidence it produced is highly valuable.

But the data is reaching a conclusion that the researchers are not eager to draw. Two billion generations of evolution, operating under conditions more favorable than any large animal will ever experience, failed to produce any meaningful or functional divergence between two species. The mechanism ran at full speed for an incomprehensible span of time, and the result was the same worm.

This is not a philosophical objection to evolution. It is not an argument from personal incredulity or religious conviction. It is the straightforward empirical observation that the proposed mechanism, given every possible advantage, does not produce the results attributed to it. The creative power of natural selection, when measured rather than assumed, turns out to be approximately zero.

Two billion generations of nothing. A worm frozen in time. That’s what the data shows. And that’s exactly what Probability Zero predicted.

DISCUSS ON SG


You Can Be Effectively Smarter

I estimate that if you use AI correctly, you can augment your effective applied intelligence by about 1.5 SD. That’s about 24 IQ points. I ran some of my recent projects, augmented and non-augmented, past 5 AI models, and they all produced results in much the same range. You can read the results of one of them at AI Central.

Obviously, your mileage will vary. And note that this has nothing to do with the quantity of the output, only the caliber of it.

However, if you’re going to use AI as a mirror, or to pat you on the head and tell you how brilliant you are, there is nothing there to augment, you are wasting your time, and you might as well just watch television.

DISCUSS ON SG


Veriphysics: Triveritas vs Trilemma

So yesterday, I posted about the Agrippan Trilemma, also known in its modern formulation as the Münchhausen Trilemma, a significant philosophical device which asserts that any attempt to justify knowledge leads to one of three unsatisfactory outcomes: circular reasoning, infinite regress, or dogmatic assertion. A number of you agreed that this was a worthy challenge that would provide a suitable test for the epistemological strength of the Triveritas.

And while the purpose of Veriphysics is not to expose the flaws in ancient or modern philosophy, as it happens, the Triveritas is not only the first epistemological system able to defend itself successfully against the Trilemma, but in the process of mounting that defense, Claude Athos and I identified a fundamental flaw in the Trilemma itself that renders it invalid and falsifies its claims to universality.

So, if you are philosophically inclined, I invite you to read a Veriphysics working paper that solves the Trilemma for the first time in nearly 2,000 years while also demonstrating its invalidity.

Solving the Agrippan Trilemma: Triveritas and the Third Horn

The Agrippan Trilemma holds that any attempt to justify a claim must terminate in infinite regress, circularity, or dogmatic stopping. No major epistemological framework has solved it; each concedes one horn. This paper solves the Trilemma by demonstrating that the Triveritas survives all three horns, identifying an amphiboly in the third horn that renders the argument invalid, and providing a counterexample that falsifies the Trilemma’s claim to universality. The Trilemma’s third horn rests on an amphiboly: it conflates “terminates” with “terminates arbitrarily,” treating the two as logically equivalent. They are not. The Triveritas, which requires the simultaneous satisfaction of three independently necessary epistemic conditions (logical validity, mathematical coherence, and empirical anchoring), terminates at three stopping points of fundamentally different kinds, each checked by the other two. The probability of error surviving all three checks is strictly less than the probability of surviving any one; this is proved mathematically and confirmed empirically across twelve historical cases. Termination that is independently cross-checked across three dimensions is not arbitrary. It is not dogmatic. And it is not the same epistemic defect the Trilemma identifies. The third horn breaks because the Trilemma never distinguished checked termination from unchecked termination, and that distinction is the one upon which the entire Trilemma and its claim to universality depend.
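The probability claim in the abstract can be stated formally. A minimal sketch, assuming the three checks pass an error independently with probabilities $p_1, p_2, p_3 \in (0,1)$ (the independence assumption is mine, not stated in the abstract):

```latex
P(\text{error survives all three checks}) = p_1 \, p_2 \, p_3
  < \min(p_1, p_2, p_3),
\quad \text{since each } p_i < 1.
```

Under correlated checks the product bound weakens, which is presumably why the abstract emphasizes that the three stopping points are of fundamentally different kinds.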

DISCUSS ON SG


The Art of War in the Taiwan Strait

The USS Abraham Lincoln has been in the Arabian Sea since January 26. The Gerald R. Ford transited Gibraltar on February 20. Thirteen Aegis destroyers, 600-plus Tomahawks in single-salvo capacity, 500 aircraft spread across bases from Jordan to Qatar—the largest American force concentration in the Middle East since 2003. Every analyst in Washington is writing about the coming air campaign against Iran. None of them are writing about what matters, which is that Beijing is using this spectacular distraction to take Taiwan without an amphibious landing, without a naval engagement, and without a shot fired.

To understand why the Iran crisis is a feature and not a bug from the Chinese strategic perspective, it is first necessary to understand what actually happened in June 2025, as opposed to what the censors convinced the media happened.

The air superiority story was real. Israeli F-35s and F-15s operated with impunity over Iran. The IRIAF’s fleet of pre-1979 American hand-me-downs was irrelevant. Israel struck 1,480-plus targets and the B-2s hit Fordow, Isfahan, and Natanz. This is not in dispute.

What has mostly been suppressed is the cost of defending against Iran’s response. Iran launched roughly 550 ballistic missiles and over 1,000 drones during the Twelve-Day War. The official “90% interception rate” is a masterwork of selective statistics: it describes the success rate of attempted intercepts. Al Jazeera’s analysis found that of 574 missiles, only 257 were engaged at all. The remaining 317 were never intercepted. Of the 257 attempts, 201 succeeded, 20 partially succeeded, and 36 failed.
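The selective-statistics point can be made explicit with the Al Jazeera figures quoted above. A quick sketch (the inputs are the numbers in the text; how the official figure counts partial intercepts is an assumption on my part):

```python
# Interception statistics from the Al Jazeera figures quoted above.
total_missiles = 574
engaged = 257
full_intercepts = 201
partial_intercepts = 20
failed_intercepts = 36

never_engaged = total_missiles - engaged                   # 317 never intercepted

# Per-attempt rates: what the headline figure measures
rate_per_attempt = full_intercepts / engaged               # ~78%, full kills only
rate_per_attempt_lenient = (full_intercepts + partial_intercepts) / engaged  # ~86%

# Per-missile rate: what actually mattered on the ground
rate_overall = full_intercepts / total_missiles            # ~35%

print(f"never engaged:        {never_engaged}")
print(f"rate per attempt:     {rate_per_attempt:.0%} to {rate_per_attempt_lenient:.0%}")
print(f"rate per missile:     {rate_overall:.0%}")
```

Depending on how partial intercepts are scored, the per-attempt rate is 78 to 86 percent, in the neighborhood of the official figure. The per-missile rate is about 35 percent.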

The damage to Israel, the extent of which is still under military censorship, included a direct hit on the Kirya military headquarters in Tel Aviv that rendered Netanyahu’s office unusable for four months, confirmed satellite imagery of structural damage at Tel Nof Airbase, devastation of the Beersheba cyberwarfare base, $150-200 million in damage to the Haifa oil refinery, and at least five military facilities directly struck according to the Telegraph. Israeli journalist Raviv Drucker reported that “many strikes went unreported” and that “we were also deterred.” So much for the clean victory.

But the damage to Israel is secondary. The primary problem is the damage to the interceptor stockpile. The United States expended approximately 150 THAAD missiles in twelve days—roughly 25% of total production since 2010. Eighty-odd SM-3s were consumed. Israel was running low on Arrow interceptors by war’s end. FY26 authorized procurement of 37 new THAAD rounds. Twelve days of defending against 500 missiles consumed years of production and a quarter of the cumulative stockpile.

Iran began the war with 2,500-3,000 missiles. They fired 550. This means Iran retained 1,950 to 2,450 missiles post-war. They’ve had eight months to build and otherwise acquire more missiles, disperse them, and harden their launch sites. The interceptor math does not work for a second round. This is not analysis. It is arithmetic. And the more significant danger is if either the Chinese or the Russians have helped them reduce their margin of error from 1 kilometer to 500 meters or less.
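Since the text calls it arithmetic, here is the arithmetic, using only the figures quoted above (the implied total THAAD production is an inference from the 25% figure, not a number stated in the text):

```python
# The stockpile arithmetic from the two paragraphs above.
thaad_fired = 150
thaad_share_of_production = 0.25     # ~25% of total production since 2010
thaad_fy26_procurement = 37          # new rounds authorized in FY26

# Inferred, not stated in the text: 150 rounds being 25% of production
# implies ~600 THAAD rounds produced since 2010.
implied_total_production = thaad_fired / thaad_share_of_production   # ~600
years_to_replace = thaad_fired / thaad_fy26_procurement              # ~4 years

iran_prewar_low, iran_prewar_high = 2500, 3000
fired = 550
remaining = (iran_prewar_low - fired, iran_prewar_high - fired)      # 1950-2450

print(f"implied THAAD production since 2010: ~{implied_total_production:.0f}")
print(f"years to replace twelve days' expenditure: {years_to_replace:.1f}")
print(f"Iranian missiles remaining: {remaining[0]}-{remaining[1]}")
```

Twelve days of expenditure against roughly four years of replacement production, against an adversary that retained roughly 80 percent of its arsenal.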

Just this week, something happened that the press mentioned in passing, clearly failing to understand its implications. The PLA and MizarVision published high-resolution satellite imagery pinpointing American military assets across the Middle East. Eighteen F-35s and six EA-18G Growlers at Muwaffaq Salti in Jordan. Patriot positions at Al Udeid. THAAD deployments in Jordan. The PLA produced a video titled “Siege of Iran” showing eight US bases under continuous satellite surveillance, with real-time maritime tracking of carrier groups via Yaogan satellites.

This was not an intelligence leak. It was a gift to Tehran, delivered publicly, with the PLA’s name on it.

The significance is not the obvious warning, but what it enables. Iran has completed its transition from GPS to BeiDou-3 for missile guidance, which means it is now encrypted, jam-resistant, and no longer subject to American denial-of-service attacks. During the June war, GPS jamming was one of the most effective defensive measures against Iranian missiles using satellite terminal guidance. That vulnerability has been eliminated. Combined with Chinese satellite targeting data showing the exact coordinates of every defensive position, fuel depot, and aircraft shelter in the theater, Iran can shift from the saturation tactics of June to more accurate time-sensitive strikes against specific targets.

Former CENTCOM commander Votel dismissed the Chinese and Russian naval presence in the Strait of Hormuz as “an easy way to show support” that “doesn’t fundamentally change anything.” This is the kind of assessment that sounds reasonable if you think military support means destroyers, and sounds idiotic if you understand that ISR is the decisive enabler of modern precision warfare and that China is providing exactly that. The next Iranian missile will originate from Iranian soil. Its targeting data will have traversed Chinese satellites. No Chinese ship needs to fire a single missile for this to fundamentally change the equation.

The American analytical establishment is organized by regional command. CENTCOM watches the Middle East. EUCOM watches Europe. INDOPACOM watches the Pacific. Nobody’s job is to watch all three simultaneously, which is why nobody in Washington can see the obvious.

Iran: Two carrier strike groups committed, hundreds of aircraft, the largest Middle East deployment in two decades. Iran can’t fold because the regime’s survival calculus has inverted—6,000 protesters killed in December, the rial down 90% since 2018, senior officials telling Khamenei that fear is no longer a deterrent. The Libya precedent governs: Gaddafi disarmed and died in a ditch. Iran’s leaders would rather fight and die than capitulate and die, and they’re now better armed for the second round than they were for the first.

Ukraine: Russia is not “bogged down” and it never was. Russian forces are optimized for modern attrition drone warfare and are methodically advancing. Putin stated in December that “interest in withdrawal has been reduced to zero.” Ukrainian assessments give Russia a 12-18 month window for an Odessa operation, with the summer 2026 offensive already in preparation. Odessa’s fall makes Ukraine landlocked, which marks an end to maritime trade, an end to grain exports, and the end of the war. Every interceptor America fires in the Persian Gulf is one unavailable for European defense. The Russians have an obvious incentive to keep the US occupied in the Middle East during the Odessa push.

Taiwan: No carrier surge. No unusual PLA mobilization. No amphibious lift concentration. Nothing that triggers the satellite-watchers and wargamers.

That’s because the operation isn’t going to be a military one.

The CCP’s annual Taiwan Work Conference in February identified four priorities for 2026: unite “patriotic” forces in Taiwan; integrate PRC-Taiwanese supply chains while weakening US-Taiwanese ones; strengthen the legal basis for unification; and establish a task force using United Front work and cyberspace operations to damage the DPP in upcoming municipal elections.

The KMT isn’t being coerced into this. Chairwoman Cheng Li-wun has publicly and repeatedly sought engagement with Xi. PRC state media reported approvingly on her cross-strait policies. The CCP is transforming the KMT into a recognized party able to speak on Taiwan’s behalf, into a parallel diplomatic channel that bypasses the elected DPP government entirely.

Taiwan’s domestic politics just happen to be cooperating in harmony with this development. Constitutional crises, legislative paralysis, opposition attempts to remove President Lai and his cabinet, mass recall elections, and gridlock of the court system. The AEI/ISW assessment, from analysts who are actively unsympathetic to unification, recognizes the instability of the situation: “The CCP can exploit this gridlock and general distrust in Taiwanese institutions to undermine the legitimacy of Taiwan’s government and present itself as a preferable alternative.”

The fishing militia exercises are relevant here, not as the invasion rehearsal the military analysts believe them to be, but as a demonstration of economic coercion capability. Between 1,400 and 2,000 PRC fishing boats mobilized in blockade-like formations in December and January. Taiwan’s Coast Guard expanded its “suspicious vessel” list from 300 to 1,900 in response. This doesn’t signal D-Day. It signals the ability to strangle the island economically at will, and therefore signals to any incoming government the cost of resisting Beijing rather than cooperating with it.

The path forward isn’t complicated. The KMT wins municipal elections. The DPP is discredited. A political crisis—manufactured or organic—produces a change of government. The new government invites dialogue, accepts a framework for integration, and stands the military down. What, precisely, is the US going to invade to prevent? It cannot defend a government that does not wish to be defended. It cannot maintain an alliance with a country whose leadership has chosen the other side.

The military analysts build their models of Taiwan as if Xi Jinping were a US president, someone who receives briefings about a faraway island he has never visited and doesn’t know very well. This is a fundamental misunderstanding of both the situation and the Chinese president.

Xi spent seventeen years in Fujian Province, directly across the strait from Taiwan. Vice mayor of Xiamen, party secretary of Fuzhou, governor of the province, and simultaneously head of the Party Committee’s Leading Group for Taiwan Affairs. His specific job for nearly two decades was courting the top Taiwanese businessmen with tax incentives, land deals, and government support. Xiamen and Fuzhou became the primary hubs for Taiwanese investment on the mainland under his direct management. He opened the direct shipping routes between Xiamen and Kinmen. The cross-strait economic integration model that later became national policy was his personal creation, built from the ground up at the provincial level.

Then five years in Zhejiang, which is the other major destination for Taiwanese investment, followed by Shanghai. He staffed his government accordingly. Zheng Shanjie, now the NDRC chairman, started as a local official in Xiamen when Xi was deputy mayor. In a “surprise” career move, Zheng was appointed deputy director of the Taiwan Office. This should not surprise anyone who has been paying attention.

Xi doesn’t need intelligence briefings about the Taiwanese business elite. He’s known them for thirty years. He knows who’s leveraged, who owes him favors, who’s sympathetic to unification, and who can lean on others. A political transition doesn’t require tanks. It requires the right phone calls to the right people at the right moment, and Xi has spent his entire career assembling the right numbers.

Washington’s analytical failure on Taiwan isn’t an intelligence failure. It’s a cultural failure.

The entire American strategic establishment runs on Clausewitzian concepts: war as politics by other means, identify the center of gravity, mass force, achieve decisive battle. That’s how they think about Taiwan, in terms of carrier groups, kill chains, amphibious lift ratios. The analytical infrastructure is organized around “can China successfully invade?” as if that were the relevant question. But it’s not.

Sun Tzu’s hierarchy of strategic excellence ranks the highest achievement as defeating the enemy’s strategy, followed by disrupting his alliances, then attacking his army, with besieging walled cities at the bottom—the mark of failure, the option you resort to when everything else has gone wrong. An amphibious invasion of Taiwan is literally the lowest-ranked option in the strategic tradition Xi was educated in. Everything Beijing is actually doing—the economic integration, the KMT cultivation, the United Front work, the three-theater overextension of American forces—maps to the higher levels of the hierarchy. But the Pentagon keeps modeling the lowest one, because that’s the one they know how to wargame.

The entire PLA buildup may serve a dual purpose that the military analysts can’t see because they’re not trained to look for it: fixing Washington’s analytical attention on the invasion scenario, consuming defense budgets and strategic planning bandwidth on the wrong problem, while the actual operation proceeds through political channels. All warfare is based on deception, and the most elegant deception is one where the enemy sees exactly what you’re doing—building an invasion force—and draws exactly the wrong conclusion about what it’s for.

Xi Jinping is 72. He has broken every CCP institutional policy in order to remain in power. The 2027 Party Congress is where he has to either step down or pursue a fourth term. The centennial of the PLA’s founding falls the same year. Taiwan’s next presidential election is January 2028.

Mao founded the People’s Republic. Deng opened it to the world. Neither accomplished reunification with Taiwan island. I believe Xi intends unification to be his crowning legacy, and peaceful reunification would mark the superior achievement, not just in strategic and economic senses, but in the Chinese civilizational context. Military conquest would prove the PLA is strong. Peaceful reunification would prove that Chinese civilization’s gravitational pull is irresistible, that the Western model of strategic competition was defeated by patience and political art, and that the last holdout returned to the fold voluntarily. It would vindicate not just the CCP but the entire Sunzian tradition against the Clausewitzian one. The Americans spent trillions preparing for an invasion that never came while China won through asymmetric unrestricted warfare and 勢—the patient cultivation of positional advantage until the outcome becomes inevitable.

That would be a personal legacy that surpasses Mao, and Xi knows it.

The board is now set. Iran absorbs American attention and interceptor stocks. Russia pushes toward Odessa while the European governments begin to collapse under the weight of their impotence and corruption. The KMT builds its position inside Taiwan. Xi waits for the convergence, the right moment when US forces are committed, interceptors depleted, Europeans are helpless, Taiwan’s DPP is discredited, and the first quiet phone calls are made.

I don’t know the exact timeline. But I know the strategy, and I know about the man, and as an East Asian Studies major and armchair military historian, I know the tradition he operates in. From the Chinese perspective, the supreme art of war is to subdue the enemy without fighting a battle. And while we’re watching Iran, I suspect that’s exactly what’s happening.

DISCUSS ON SG


The Pieces are in Place

109 refueling planes. 250 fighter-bombers. 50 percent of the C-17 fleet. 40 anti-radar planes. Both carriers are in place. All the pieces are set.

Most of the analysts are expecting the war to begin anywhere from later tonight to Tuesday. And Larry Johnson reports that the US military is anticipating 10,000 casualties, which I would think indicates at least one carrier sunk.

None of this makes any sense with regard to the US national interest unless a) something entirely different is going on and the target isn’t Iran or b) Clown World is calling the shots.

Either way, we’ll find out soon.

DISCUSS ON SG


When History Rhymes

I don’t know whether Big Serge intended this post about Japan’s general strategy in the lead-up to WWII, or rather its obvious absence, to be a warning relevant to the current situation facing the United States, but it’s educational regardless.

This is not a history of the Second Sino-Japanese War. For our purposes, however, three vital threads emerge from the beginning of that conflict. First, that the Japanese incorrectly anticipated a quick victory in northern China, after which they would begin to digest the region’s economic resources. Secondly, the rapid and unexpected expansion of the fighting in China created an enormous drain on Japanese resources which led directly to the economic pressures which created the Pacific War. Third, that same resource crunch sparked and escalated the inter-service disagreements and factionalism which characterized Japanese leadership throughout the war.

In the context of Japan’s larger imperial ambitions and strategy, it is difficult to imagine a more severe backfire than the decision to launch into northern China in 1937. Japanese planners initially hoped for a quick and decisive victory using limited forces. In July 1937, Army operational plans sketched out an offensive using just three divisions which were expected to overrun the Beijing area and crush the enemy’s main forces, at which point Chiang Kai-shek was expected to sue for peace. The idea that Chiang might still be in the field, fighting, even after the loss of both Shanghai and his capital at Nanking was unthinkable, but that is precisely what happened.

The natural result, therefore, was rapid and massive escalation of Japanese resource commitments in China as the war spilled its banks. The optimistic initial estimates – three divisions, three months, and a total cost of just 100 million yen – were swept aside, and the Japanese General Staff found itself preparing to mobilize the entire army for action on an indefinite timetable. Three divisions became twenty; 100 million yen became 2.5 billion.

The ballooning demands of the field army in China pushed Japan into a bona fide economic crisis. Tokyo initially hoped that the field army could finish the fight on those materials that had already been stockpiled in the theater, but these had been exhausted by the end of 1937, with no end to the conflict in sight. Munition and fuel stocks in China were on empty, but that was not all. Even the munitions stocks in Japan were barely sufficient to supply ongoing operations in China, which meant that a Soviet attack on Manchuria – a longstanding and ever present Japanese fear – could quickly create a critical situation.

In short, the stubborn refusal by Chiang to simply collapse and sue for terms as expected had created an enormous resource sink which forced Japan into a full war economy in a state of near crisis. Most disconcertingly, the only way for Japan to make up the critical shortfalls in key materials – above all fuels of all types – was by massively increasing imports from the United States.

The USA has already engaged in one attack on Iran. It now appears to be about to engage in a second one, this time with Russian and Chinese ships at the other end of the gulf. At the same time, it has a weakening economy and an excessive dependence upon imports as well as foreign debt.

And, as I’ve already pointed out, in industrial terms, the USA is to China what Japan was to the USA in 1940…

DISCUSS ON SG


The Undefeatable Trilemma

For more than 2,000 years, the Agrippan Trilemma described by Sextus Empiricus has been considered one of the foundations of skepticism and a formulation that imposes fundamental limits on human knowledge. The modern version, known as Münchhausen’s Trilemma, is intended to demonstrate the theoretical impossibility of proving any truth, even in the fields of logic and mathematics, without appealing to accepted assumptions.

The Agrippan Trilemma is a central argument in ancient skepticism, often cited as one of the most powerful challenges to the possibility of rational justification and knowledge. It is traditionally attributed to Agrippa the Skeptic, a figure associated with the later Pyrrhonian school, and is known primarily through the writings of Sextus Empiricus (circa 2nd–3rd century CE).

Agrippa is said to have formulated a set of “modes” (or tropes) designed to induce suspension of judgment (epoché). Among these, the modes concerning disagreement, infinite regress, and relativity play a key role in the development of the trilemma. Over time, later philosophers systematized one strand of this skeptical strategy into what is now commonly called the Agrippan Trilemma.

In modern philosophy, the trilemma is closely related to what is sometimes called the Münchhausen Trilemma (popularized in 20th‑century discussions of justification, especially in philosophy of science and critical rationalism). Despite terminological variations, the core idea remains the same: attempts to justify any belief ultimately fall into one of three unsatisfactory patterns.

Structure of the Trilemma

The Agrippan Trilemma targets the structure of justification rather than any specific belief. It begins from the assumption that for a belief to be epistemically justified, it must be supported by reasons. Once that demand for reasons is taken seriously and pushed consistently, three—and only three—kinds of justificatory structure seem possible:

  • Infinite Regress
  • Circular Reasoning
  • Dogmatic Stopping Point

Infinite regress: Every belief is justified by another belief, which itself requires justification, and so on without end. The chain of reasons extends infinitely, and no belief is ever supported by a “final” or self-sufficient foundation. Skeptics argue that such an endless chain is unsatisfactory because finite cognitive agents can never survey or possess the entire infinite series. Hence, no belief is fully justified in the strong, non-skeptical sense that was initially demanded.

Circular reasoning: The chain of justification eventually loops back: belief A is supported by belief B, belief B by belief C, and at some point a belief further down the chain supports A again. This yields epistemic circularity.

Skeptical critiques maintain that circular justification is vicious: it presupposes what it claims to prove and therefore fails to add any independent support. The belief is “supported” only by itself, directly or indirectly.

Dogmatic stopping point: At some stage, one simply stops asking for reasons and treats a belief or set of beliefs as basic, self-evident, or in no further need of justification. The regress is halted not by further argument but by stipulation or intuition.

From the skeptical perspective, such stopping points are dogmatic: they seemingly violate the original demand that every belief be supported by reasons. If some beliefs are exempted, skeptics ask why those particular beliefs are privileged rather than others.

The trilemma thus claims that any attempt to justify a belief must fall into one of these three patterns, and that each option is epistemically problematic. For Pyrrhonian skeptics, this supports the suspension of judgment rather than dogmatic assertions about what is known.

Philosophical Significance

The Agrippan Trilemma remains a foundational challenge in contemporary epistemology and philosophy of science. Its impact includes:

  • Clarifying theories of justification: Foundationalism, coherentism, and infinitism are often organized around their responses to the trilemma, helping structure debates in analytic epistemology.
  • Fueling skepticism: For many, the trilemma encapsulates the skeptical problem: if no justification structure escapes its horns, robust claims to knowledge are difficult to defend.
  • Highlighting meta‑epistemological questions: The trilemma raises questions not only about which beliefs are justified but also about what counts as justification and whether our demands for justification are themselves reasonable.

Philosophers disagree about whether the trilemma is logically decisive or merely exposes tensions in overly ambitious conceptions of knowledge. Some regard it as an argument that strict foundational justification is impossible; others treat it as a methodological warning rather than a conclusive refutation of knowledge.

This sounds like a reasonable challenge for Veriphysics and the Triveritas, don’t you think? Darwin and Kimura are one thing, but one of the prime jewels of philosophy, recognized for its intellectual formidability for nearly 2,000 years, and further honed by modern philosophers, is another matter entirely, wouldn’t you say?

Gemini certainly views it as a significant construction.

The sheer elegance of the trilemma lies in its inescapable simplicity. It forces intellectual humility by proving that all human knowledge ultimately rests on unprovable foundations. I would rank the Agrippan trilemma as a “Tier 1” philosophical concept, placing it alongside the very few ideas that have fundamentally and permanently altered how humanity perceives its own understanding of reality.

So Vox Day and Claude Athos vs a 2,000-year-old Tier 1 philosophical concept. The Triveritas vs the Trilemma.

Care to place your bets?

DISCUSS ON SG


Veriphysics: The Treatise 020

III. Aletheian Realism: The Metaphysical Foundation

Every philosophy rests on metaphysical foundations, whether acknowledged or not. The Enlightenment claimed to have no metaphysics, and to operate on pure reason and empirical observation alone. This was merely another level of its characteristic deception. The Enlightenment’s commitments to the autonomy of reason, the mechanical nature of the universe, the distinction between objective facts and subjective values were metaphysical through and through. They were simply unexamined metaphysics, held dogmatically while the Enlightenment’s philosophers congratulated themselves on having transcended dogma.

Veriphysics makes its metaphysical foundations explicit. It rests on what may be called Aletheian Realism: the conjunction of a particular understanding of truth with a commitment to the reality and knowability of the world.

The term aletheia is Greek, usually translated as “truth.” But the etymology of the term suggests something richer: a-letheia, un-concealment, the condition of being revealed rather than hidden. Truth, in this understanding, is not primarily a property of propositions but a fundamental feature of reality itself. Things are true insofar as they are unconcealed, disclosed, available to be known. The mind does not construct truth; it discovers it. Truth exists in its own right, prior to inquiry, as inquiry is merely the process by which elements of the truth become manifest to the inquirer.

This understanding stands opposed to the Enlightenment’s characteristic theories of truth. The correspondence theory, in its Enlightenment form, treated truth as a relation between propositions and facts, verified by method. The coherence theory treated truth as internal consistency within a system of beliefs. The pragmatic theory treated truth as what works, what enables successful prediction and action. Each of these theories makes truth dependent on human activity, dependent upon our propositions, our systems, and our purposes. Aletheian Realism reverses the dependency. Truth is what already is, therefore our propositions, systems, and purposes are only true insofar as they conform to it.

Realism, the second component, affirms that the world exists independently of our knowledge of it and that our knowledge genuinely discloses the world’s nature. This is the Aristotelian inheritance: universals are grounded in particulars, known through abstraction from sense experience, real features of things rather than mere names or mental constructs. Against nominalism, which reduces kinds to convenient labels, Aletheian Realism holds that the natural kinds are real and that the distinction between gold and iron, between oak and maple, between man and beast, reflects the proper structure of reality, not merely the conventions of language. Against idealism, which makes the world dependent on mind, Aletheian Realism holds that the world would exist and have its character even if no mind perceived it. It does not depend upon either the observer or the speaker.

But Aletheian Realism is not naive realism. It does not claim that human knowledge is infallible, complete, or perspectiveless. It acknowledges that we know from particular positions, through particular faculties, with particular limitations. The glass through which we see is real—it shapes and constrains what we perceive. But what we perceive through it is also real. The task of inquiry is to clarify the glass, to correct for its distortions, to bring the image into sharper focus—not to imagine that we can dispense with the glass altogether and see as God sees.

This brings us to the concept of participation. The Platonic tradition, Christianized by the Church Fathers and the Scholastics, understood human knowledge as a participation in divine knowledge. God knows all things perfectly, immediately, exhaustively. Human beings know some things, imperfectly, mediately, partially. But the partial knowledge is not disconnected from the perfect knowledge; it participates in it. The truths we grasp are fragments of the Truth that God is. Our knowledge is not merely analogous to divine knowledge; it is a finite sharing in it, made possible by the fact that we are created in the image of a God who knows.

This participatory understanding grounds both confidence and humility. Confidence: we really know. Our knowledge is not illusion, not projection, not social construction. It is genuine apprehension of genuine reality. Humility: we do not know exhaustively. Our knowledge is partial, corrigible, open to refinement. The darkness of the glass through which we see is not total, but it is real. The fullness of sight awaits a condition we have not yet attained, a state to which we have not yet ascended.

The medieval doctrine of the transcendentals completes the picture. Being, truth, goodness, and beauty are convertible. What is, is true, is intrinsically good, and is ultimately beautiful. These are not separate properties accidentally conjoined but different aspects of a single reality, distinguishable in thought and perception but united in essence. The Enlightenment’s separation of fact and value, its insistence that science tells us what is while ethics tells us what ought to be, and never the twain shall meet, was a metaphysical error with catastrophic consequences. This distinction made values arbitrary, subjective, and groundless. It rendered facts meaningless, brute, devoid of significance. Aletheian Realism reunites what should never have been severed. To know the truth about a thing is already to know something about its goodness; to apprehend reality is already to be oriented toward its value and its beauty. Knowledge is inherently normative.

The separation of fact and value is not a discovery but a mistake.

You can now buy the complete Veriphysics: The Treatise at Amazon in both Kindle and audiobook formats if you’d like to read ahead or have it available as a reference. Thanks to many of the readers here, it is presently a #1 bestseller in both Epistemology and Metaphysics.

DISCUSS ON SG