Nitrogen dioxide exposure, health outcomes, and associated demographic disparities due to gas and propane combustion by U.S. stoves | Science Advances

acdha (Washington, DC): Natural gas is more than a climate change problem, and we could replace a lot of equipment with the $1B annual cost estimated in this study.

The Lunacy of Artemis (Idle Words)


distant photo of Artemis rocket on launch pad

A little over 51 years ago, a rocket lifted off from Cape Canaveral carrying three astronauts and a space car. After a three day journey to the moon, two of the astronauts climbed into a spindly lander and made the short trip down to the surface, where for another three days they collected rocks and did donuts in the space car. Then they climbed back into the lander, rejoined their colleague in orbit, and departed for Earth. Their capsule splashed down in the South Pacific on December 19, 1972. This mission, Apollo 17, would be the last time human beings ventured beyond low Earth orbit.

If you believe NASA, late in 2026 Americans will walk on the moon again. That proposed mission is called Artemis 3, and its lunar segment looks a lot like Apollo 17 without the space car. Two astronauts will land on the moon, collect rocks, take selfies, and about a week after landing rejoin their orbiting colleagues to go back to Earth.

But where Apollo 17 launched on a single rocket and cost $3.3 billion (in 2023 dollars), the first Artemis landing involves a dozen or two heavy rocket launches and costs so much that NASA refuses to give a figure (one veteran of NASA budgeting estimates it at $7-10 billion).[1] The single-use lander for the mission will be the heaviest spacecraft ever flown, and yet the mission's scientific return—a small box of rocks—is less than what came home on Apollo 17. And the whole plan hinges on technologies that haven't been invented yet becoming reliable and practical within the next eighteen months.

You don’t have to be a rocket scientist to wonder what’s going on here. If we can put a man on the moon, then why can't we just go do it again? The moon hasn’t changed since the 1960’s, while every technology we used to get there has seen staggering advances. It took NASA eight years to go from nothing to a moon landing at the dawn of the Space Age. But today, twenty years and $93 billion after the space agency announced our return to the moon, the goal seems as far out of reach as ever.[2]

Articles about Artemis often give the program’s tangled backstory. But I want to talk about Artemis as a technical design, because there’s just so much to drink in. While NASA is no stranger to complex mission architectures, Artemis goes beyond complex to the just plain incoherent. None of the puzzle pieces seem to come from the same box. Half the program requires breakthrough technologies that make the other half unnecessary. The rocket and spacecraft NASA spent two decades building can’t even reach the moon. And for reasons no one understands, there’s a new space station in the mix.

In the past, whatever oddball project NASA came up with, we at least knew they could build the hardware. But Artemis calls the agency’s competence as an engineering organization into question. For the first time since the early 1960's, it's unclear whether the US space agency is even capable of putting astronauts on the Moon.

Photograph of SLS rocket

A Note on Apollo

In this essay I make a lot of comparisons to Project Apollo. This is not because I think other mission architectures are inferior, but because the early success of that program sets such a useful baseline. At the dawn of the Space Age, using rudimentary technology, American astronauts landed on the moon six times in seven attempts. The moon landings were NASA’s greatest achievement and should set a floor for what a modern mission, flying modern hardware, might achieve.

Advocates for Artemis insist that the program is more than Apollo 2.0. But as we’ll see, Artemis can't even measure up to Apollo 1.0. It costs more, does less, flies less frequently, and exposes crews to risks that the steely-eyed missile men of the Apollo era found unacceptable. It's as if Ford in 2024 released a new model car that was slower, more accident-prone, and ten times more expensive than the Model T.

When a next-generation lunar program can’t meet the cost, performance, or safety standards set three generations earlier, something has gone seriously awry.

Photograph of SLS rocket

I. The Rocket

The jewel of Artemis is a big orange rocket with a flavorless name, the Space Launch System (SLS). SLS looks like someone started building a Space Shuttle and ran out of legos for the orbiter. There is the familiar orange tank and a pair of big white solid rocket boosters, but then the rocket just peters out in a 1960’s style stack of cones and cylinders.

The best way to think of SLS is as a balding guy with a mullet: there are fireworks down below that are meant to distract you from a sad situation up top. In the case of the rocket, those fireworks are a first stage with more thrust than the Saturn V, enough thrust that the boosted core stage can nearly put itself into orbit. But on top of this monster sits a second stage so anemic that even its name (the Interim Cryogenic Propulsion Stage) is a kind of apology. For eight minutes SLS roars into the sky on a pillar of fire. And then, like a cork popping out of a bottle, the tiny ICPS emerges and drifts vaguely moonwards on a wisp of flame.

With this design, the minds behind SLS achieved a first in space flight, creating a rocket that is at the same time more powerful and less capable than the Saturn V. While the 1960’s giant could send 49 metric tons to the Moon, SLS only manages 27 tons—not enough to fly an Apollo-style landing, not enough to even put a crew in orbit around the Moon without a lander. The best SLS can do is slingshot the Orion spacecraft once around the moon and back, a mission that will fly under the name Artemis 2.

NASA wants to replace ICPS with an ‘Exploration Upper Stage’ (the project has been held up, among other things, by a near-billion dollar cost overrun on a launch pad). But even that upgrade won’t give SLS the power of the Saturn V. For whatever reason, NASA designed its first heavy launcher in forty years to be unable to fly the simple, proven architecture of the Apollo missions.

Of course, plenty of rockets go on to enjoy rewarding, productive careers without being as powerful as the Saturn V. And if SLS rockets were piling up at the Michoud Assembly Facility like cordwood, or if NASA were willing to let its astronauts fly commercial, it would be a simple matter to split Artemis missions across multiple launches.

But NASA insists that astronauts fly SLS. And SLS is a “one and done” rocket, artisanally hand-crafted by a workforce that likes to get home before traffic gets bad. The rocket can only launch once every two years at a cost of about four billion dollars[3]—about twice what it would cost to light the rocket’s weight in dollar bills on fire[4].

Early on, SLS designers made the catastrophic decision to reuse Shuttle hardware, which is like using Fabergé eggs to save money on an omelette. The SLS core stage recycles Space Shuttle main engines, actual veterans of old Shuttle flights called out of retirement for one last job. Refurbishing a single such engine to work on SLS costs NASA $40 million, or a bit more than SpaceX spends on all 33 engines on its Superheavy booster.[5] And though the Shuttle engines are designed to be fully reusable (the main reason they're so expensive), every SLS launch throws four of them away. Once all the junkyards are picked clean, NASA will pay Aerojet Rocketdyne to restart production of the classic engine at a cool unit cost of $145 million[6].

The story is no better with the solid rocket boosters, the other piece of Shuttle hardware SLS reuses. Originally a stopgap measure introduced to save the Shuttle budget, these heavy rockets now attach themselves like barnacles to every new NASA launcher design. To no one’s surprise, retrofitting a bunch of heavy steel casings left over from Shuttle days has saved the program nothing. Each SLS booster is now projected to cost $266 million, or about twice the launch cost of a Falcon Heavy.[7] Just replacing the asbestos lining in the boosters with a greener material, a project budgeted at $4.4M, has now cost NASA a quarter of a billion dollars. And once the leftover segments run out seven rockets from now, SLS will need a brand new booster design, opening up fertile new vistas of overspending.

Costs on SLS have reached the point where private industry is now able to develop, test, and launch an entire rocket program for less than NASA spends on a single engine[8]. Flying SLS is like owning a classic car—everything is hand built, the components cost a fortune, and when you finally get the thing out of the shop, you find yourself constantly overtaken by younger rivals.

But the cost of SLS to NASA goes beyond money. The agency has committed to an antiquated frankenrocket just as the space industry is entering a period of unprecedented innovation. While other space programs get to romp and play with technologies like reusable stages and exotic alloys, NASA is stuck for years wasting a massive, skilled workforce on a dead-end design.

The SLS program's slow pace also affects safety. Back in the Shuttle era, NASA managers argued that it took three to four launches a year to keep workers proficient enough to build and launch the vehicles safely. A boutique approach where workers hand-craft one rocket every two years means having to re-learn processes and procedures with every launch.

It also leaves no room in Artemis for test flights. The program simply assumes success, flying all its important 'firsts' with astronauts on board. When there are unanticipated failures, like the extensive heat shield spalling and near burn-through observed in Artemis 1,[9] the agency has no way to test a proposed fix without a multi-year delay to the program. So they end up using indirect means to convince themselves that a new design is safe to fly, a process ripe for error and self-delusion.

Orion space capsule with OVERSIZE LOAD banner

II. The Spacecraft

Orion, the capsule that launches on top of SLS, is a relaxed-fit reimagining of the Apollo command module suitable for today’s larger astronaut. It boasts modern computers, half again as much volume as the 1960’s design, and a few creature comforts (like not having to poop in a baggie) that would have pleased the Apollo pioneers.

The capsule’s official name is the Orion Multipurpose Crew Vehicle, but finding even a single purpose for Orion has greatly challenged NASA. For twenty years the spacecraft has mostly sat on the ground, chewing through a $1.2 billion annual budget. In 2014, the first Orion flew a brief test flight. Eight short years later, Orion launched again, carrying a crew of instrumented mannequins around the Moon on Artemis 1. In 2025 the capsule (by then old enough to drink) is supposed to fly human passengers on Artemis 2.

Orion goes to space attached to a basket of amenities called the European Service Module. The ESM provides Orion with solar panels, breathing gas, batteries, and a small rocket that is the capsule’s principal means of propulsion. But because the ESM was never designed to go to the moon, it carries very little propellant—far too little to get the hefty capsule in and out of lunar orbit.[10]

And Orion is hefty. Originally designed to hold six astronauts, the capsule was never resized when the crew requirement shrank to four. Like an empty nester’s minivan, Orion now hauls around a bunch of mass and volume that it doesn’t need. Even with all the savings that come from replacing Apollo-era avionics, the capsule weighs almost twice as much as the Apollo Command Module.

This extra mass has knock-on effects across the entire Artemis design. Since a large capsule needs a large abort rocket, SLS has to haul Orion's massive Launch Abort System—seven tons of dead weight—nearly all the way into orbit. And reinforcing the capsule so that abort system won't shake the astronauts into jelly means making it heavier, which puts more demand on the parachutes and heat shield, and around and around we go.

Orion space capsule with OVERSIZE LOAD banner

Size comparison of the Apollo command and service module (left) and Orion + European Service Module (right)

What’s particularly frustrating is that Orion and ESM together have nearly the same mass as the Apollo command and service modules, which had no trouble reaching the Moon. The difference is all in the proportions. Where Apollo was built like a roadster, with a small crew compartment bolted onto an oversized engine, Orion is the Dodge Journey of spacecraft—a chunky, underpowered six-seater that advertises to the world that you're terrible at managing money.

diagram of near-rectilinear halo orbit

III. The Orbit

The fact that neither its rocket nor its spaceship can get to the Moon creates difficulties for NASA’s lunar program. So, like an aging crooner transposing old hits into an easier key, the agency has worked to find a ‘lunar-adjacent’ destination that its hardware can get to.

Their solution is a bit of celestial arcana called Near Rectilinear Halo Orbit, or NRHO. A spacecraft in this orbit circles the moon every 6.5 days, passing 1,000 kilometers above the lunar north pole at closest approach, then drifting out about 70,000 kilometers (a fifth of the Earth/Moon distance) at its furthest point. Getting to NRHO from Earth requires significantly less energy than entering a useful lunar orbit, putting it just within reach for SLS and Orion.[11]

To hear NASA tell it, NRHO is so full of advantages that it’s a wonder we stay on Earth. Spacecraft in the orbit always have a sightline to Earth and never pass through its shadow. The orbit is relatively stable, so a spacecraft can loiter there for months using only ion thrusters. And the deep space environment is the perfect place to practice going to Mars.

But NRHO is terrible for getting to the moon. The orbit is like one of those European budget airports that leaves you out in a field somewhere, requiring an expensive taxi. In Artemis, this taxi takes the form of a whole other spaceship—the lunar lander—which launches without a crew a month or two before Orion and is supposed to be waiting in NRHO when the capsule arrives.

Once these two spacecraft dock together, two astronauts climb into the lander from Orion and begin a day-long descent to the lunar surface. The other two astronauts wait for them in NRHO, playing hearts and quietly absorbing radiation.

Apollo landings also divided the crew between lander and orbiter. But those missions kept the command module in a low lunar orbit that brought it over the landing site every two hours. This proximity between orbiter and lander had enormous implications for safety. At any point in the surface mission, the astronauts on the moon could climb into the ascent rocket, hit the big red button, and be back sipping Tang with the command module pilot by bedtime. The short orbital period also gave the combined crew a dozen opportunities a day to return directly to Earth.[12]
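
That two-hour figure is easy to check from Kepler's third law. A quick back-of-the-envelope sketch, using the standard lunar gravitational parameter and assuming Apollo's roughly 110 km parking altitude:

    import math

    # Circular orbit period from Kepler's third law: T = 2*pi*sqrt(a^3/mu)
    MU_MOON = 4.9048e12    # lunar gravitational parameter, m^3/s^2
    R_MOON = 1_737_400.0   # mean lunar radius, m

    def orbit_period_hours(altitude_m: float) -> float:
        """Period of a circular lunar orbit at the given altitude, in hours."""
        a = R_MOON + altitude_m  # semi-major axis of a circular orbit
        return 2 * math.pi * math.sqrt(a**3 / MU_MOON) / 3600

    print(orbit_period_hours(110e3))  # ~2.0 hours, matching the text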

Sitting in NRHO makes abort scenarios much harder. Depending on when in the mission it happens, a stricken lander might need three or more days to catch up with the orbiting Orion. In the worst case, the crew might find themselves stuck on the lunar surface for hours after an abort is called, forced to wait for Orion to reach a more favorable point in its orbit. And once everyone is back on Orion, more days might pass before the crew can depart for Earth. These long and variable abort times significantly increase risk to the crew, making many scenarios that were survivable on Apollo (like Apollo 13!) lethal on Artemis.[13]

The abort issue is just one example of NRHO making missions slower. NASA likes to boast that Orion can stay in space far longer than Apollo, but this is like bragging that you’re in the best shape of your life after the bank repossessed your car. It's an oddly positive spin to put on bad life choices. The reason Orion needs all that endurance is because transit times from Earth to NRHO are long, and the crew has to waste additional time in NRHO waiting for orbits to line up. The Artemis 3 mission, for example, will spend 24 days in transit, compared to just 6 days on Apollo 11.

NRHO even dictates how long astronauts stay on the Moon—surface time has to be a multiple of the 6.5 day orbital period. This lack of flexibility means that even early flag-and-footprints missions like Artemis 3 have to spend at least a week on the moon, a constraint that adds considerable risk to the initial landing. [15]

In spaceflight, brevity is safety. There's no better way to protect astronauts from the risks of solar storms, mechanical failure, and other mishaps than by minimizing slack time in space. Moreover, a safe architecture should allow for a rapid return to Earth at any point in the mission. There’s no question astronauts on the first Artemis missions would be better off with Orion in low lunar orbit. The decision to stage from NRHO is an excellent example of NASA designing its lunar program in the wrong direction—letting deficiencies in the hardware dictate the level of mission risk. 

diagram of Gateway

Early diagram of Gateway. Note that the segment marked 'human lander system' now dwarfs the space station.

IV. Gateway

I suppose at some point we have to talk about Gateway. Gateway is a small modular space station that NASA wants to build in NRHO. It has been showing up in various mission plans like a bad smell since before 2012.

Early in the Artemis program, NASA described Gateway as a kind of celestial truck stop, a safe place for the lander to park and for the crew to grab a cup of coffee on their way to the moon. But when it became clear that Gateway would not be ready in time for Artemis 3, NASA re-evaluated. Reasoning that two spacecraft could meet up in NRHO just as easily as three, the agency gave permission for the first moon landing to proceed without a space station.

Despite this open admission that Gateway is unnecessary, building the space station remains the core activity of the Artemis program. The three missions that follow that first landing are devoted chiefly to Gateway assembly. In fact, initial plans for Artemis 4 left out a lunar landing entirely, as if it were an inconvenience to the real work being done up in orbit.

This is a remarkable situation. It’s like if you hired someone to redo your kitchen and they started building a boat in your driveway. Sure, the boat gives the builders a place to relax, lets them practice tricky plumbing and finishing work, and is a safe place to store their tools. But all those arguments will fail to satisfy. You still want to know what building a boat has to do with kitchen repair, and why you’re the one footing the bill.

NASA has struggled to lay out a technical rationale for Gateway. The space station adds both cost and complexity to Artemis, a program not particularly lacking in either. Requiring moon-bound astronauts to stop at Gateway also makes missions riskier (by adding docking operations) while imposing a big propellant tax. Aerospace engineer and pundit Robert Zubrin has aptly called the station a tollbooth in space.

Even Gateway defenders struggle to hype up the station. A common argument is that Gateway may not be ideal for any one thing, but is good for a whole lot of things. But that is the same line of thinking that got us SLS and Orion, both vehicles designed before anyone knew what to do with them. The truth is that all-purpose designs don't exist in human space flight. The best you can do is build a spacecraft that is equally bad at everything.

But to search for technical grounds is to misunderstand the purpose of Gateway. The station is not being built to shelter astronauts in the harsh environment of space, but to protect Artemis in the harsh environment of Congress. NASA needs Gateway to navigate an uncertain political landscape in the 2030’s. Without a station, Artemis will just be a series of infrequent multibillion dollar moon landings, a red cape waved in the face of the Office of Management and Budget. Gateway armors Artemis by bringing in international partners, each of whom contributes expensive hardware. As NASA learned building the International Space Station, this combination of sunk costs and international entanglement is a powerful talisman against program death.

Gateway also solves some other problems for NASA. It gives SLS a destination to fly to, stimulates private industry (by handing out public money to supply Gateway), creates a job for the astronaut corps, and guarantees the continuity of human space flight once the ISS becomes uninhabitable sometime in the 2030’s. [16]

That last goal may sound odd if you don’t see human space flight as an end in itself. But NASA is a faith-based organization, dedicated to the principle that taxpayers should always keep an American or two in orbit. It’s a little bit as if the National Oceanic and Atmospheric Administration insisted on keeping bathyscaphes full of sailors at the bottom of the sea, irrespective of cost or merit, and kneecapped programs that might threaten the continuous human benthic presence. You can’t argue with faith.

From a bureaucrat’s perspective, Gateway is NASA’s ticket back to a golden era in the early 2000's when the Space Station and Space Shuttle formed an uncancellable whole, each program justifying the existence of the other. Recreating this dynamic with Gateway and SLS/Orion would mean predictable budgets and program stability for NASA well into the 2050’s.

But Artemis was supposed to take us back to a different golden age, the golden age of Apollo. And so there’s an unresolved tension in the program between building Gateway and doing interesting things on the moon. With Artemis missions two or more years apart, it’s inevitable that Gateway assembly will push aspirational projects like a surface habitat or pressurized rover out into the 2040’s. But those same projects are on the critical path to Mars, where NASA still insists we’re going in the late 2030’s. The situation is awkward.

So that is the story of Gateway—unloved, ineradicable, and as we’ll see, likely to become the sole legacy of the Artemis program. 

artist's rendering of human landing system

V. The Lander

The lunar lander is the most technically ambitious part of Artemis. Where SLS, Orion, and Gateway are mostly a compilation of NASA's greatest hits, the lander requires breakthrough technologies with the potential to revolutionize space travel.

Of course, you can’t just call it a lander. In Artemis speak, this spacecraft is the Human Landing System, or HLS. NASA has delegated its design to two private companies, Blue Origin and SpaceX. SpaceX is responsible for landing astronauts on Artemis 3 and 4, while Blue Origin is on the hook for Artemis 5 (notionally scheduled for 2030). After that, the agency will take competitive bids for subsequent missions.

The SpaceX HLS design is based on their experimental Starship spacecraft, an enormous rocket that takes off and lands on its tail, like 1950’s sci-fi. There is a strong “emperor’s new clothes” vibe to this design. On the one hand, it is the brainchild of brilliant SpaceX engineers and passed NASA technical review. On the other hand, the lander seems to go out of its way to create problems for itself to solve with technology.

artist's rendering of human landing system

An early SpaceX rendering of the Human Landing System, with the Apollo Lunar Module added for scale.

To start with the obvious, HLS looks more likely to tip over than the last two spacecraft to land on the moon, which tipped over. It is a fifteen story tower that must land on its ass in terrible lighting conditions, on rubble of unknown composition, over a light-second from Earth. The crew are left suspended so high above the surface that they need a folding space elevator (not the cool kind) to get down. And yet in the end this single-use lander carries less payload (both up and down) than the tiny Lunar Module on Apollo 17. Using Starship to land two astronauts on the moon is like delivering a pizza with an aircraft carrier.

Amusingly, the sheer size of the SpaceX design leaves it with little room for cargo. The spacecraft arrives on the Moon laden with something like 200 tons of cryogenic propellant,[14] and like a fat man leaving an armchair, it needs every drop of that energy to get its bulk back off the surface. Nor does it help matters that all this cryogenic propellant has to cook for a week in direct sunlight.

Other, less daring lander designs reduce their appetite for propellant by using a detachable landing stage. This arrangement also shields the ascent rocket from hypervelocity debris that gets kicked up during landing. But HLS is a one-piece rocket; the same engines that get sandblasted on their way down to the moon must relight without fail a week later.

Given this fact, it’s remarkable that NASA’s contract with SpaceX doesn’t require them to demonstrate a lunar takeoff. All SpaceX has to do to satisfy NASA requirements is land an HLS prototype on the Moon. Questions about ascent can then presumably wait until the actual mission, when we all find out together with the crew whether HLS can take off again.[17]

This fearlessness in design is part of a pattern with Starship HLS. Problems that other landers avoid in the design phase are solved with engineering. And it’s kind of understandable why SpaceX does it this way. Starship is meant to fly to Mars, a much bigger challenge than landing two people on the Moon. If the basic Starship design can’t handle a lunar landing, it would throw the company’s whole Mars plan into question. SpaceX is committed to making Starship work, which is different from making the best possible lunar lander.

Less obvious is why NASA tolerates all this complexity in the most hazardous phase of its first moon mission. Why land a rocket the size of a building packed with moving parts? It’s hard to look at the HLS design and not think back to other times when a room full of smart NASA people talked themselves into taking major risks because the alternative was not getting to fly at all.

It’s instructive to compare the HLS approach to the design philosophy on Apollo. Engineers on that program were motivated by terror; no one wanted to make the mistake that would leave astronauts stranded on the moon. The weapon they used to knock down risk was simplicity. The Lunar Module was a small metal box with a wide stance, built low enough so that the astronauts only needed to climb down a short ladder. The bottom half of the LM was a descent stage that completely covered the ascent rocket (a design that showed its value on Apollo 15, when one of the descent engines got smushed by a rock). And that ascent rocket, the most important piece of hardware in the lander, was a caveman design intentionally made so primitive that it would struggle to find ways to fail.

On Artemis, it's the other way around: the more hazardous the mission phase, the more complex the hardware. It's hard to look at all this lunar machinery and feel reassured, especially when NASA's own Aerospace Safety Advisory Panel estimates that the Orion/SLS portion of a moon mission alone (not including anything to do with HLS) already has a 1:75 chance of killing the crew.

artist's rendering of human landing system

VI. Refueling

Since NASA’s biggest rocket struggles to get Orion into distant lunar orbit, and HLS weighs fifty times as much as Orion, the curious reader might wonder how the unmanned lander is supposed to get up there.

NASA’s answer is, very sensibly, “not our problem”. They are paying Blue Origin and SpaceX the big bucks to figure this out on their own. And as a practical matter, the only way to put such a massive spacecraft into NRHO is to first refuel it in low Earth orbit.

Like a lot of space technology, orbital refueling sounds simple, has never been attempted, and can’t be adequately simulated on Earth.[18] The crux of the problem is that liquid and gas phases in microgravity jumble up into a three-dimensional mess, so that even measuring the quantity of propellant in a tank becomes difficult. To make matters harder, Starship uses cryogenic propellants that boil at temperatures about a hundred degrees colder than the plumbing they need to move through. Imagine trying to pour water from a thermos into a red-hot skillet while falling off a cliff and you get some idea of the difficulties.

To get refueling working, SpaceX will first have to demonstrate propellant transfer between rockets as a proof of concept, and then get the process working reliably and efficiently at a scale of hundreds of tons. (These are two distinct challenges). Once they can routinely move liquid oxygen and methane from Starship A to Starship B, they’ll be ready to set up the infrastructure they need to launch HLS.

artist's rendering of human landing system

The plan for getting HLS to the moon looks like this: a few months before the landing date, SpaceX will launch a special variant of their Starship rocket configured to serve as a propellant depot. Then they'll start launching Starships one by one to fill it up. Each Starship arrives in low Earth orbit with some residual propellant; it will need to dock with the depot rocket and transfer over this remnant fuel. Once the depot is full, SpaceX will launch HLS, have it fill its tanks at the depot rocket, and send it up to NRHO in advance of Orion. When Orion arrives, HLS will hopefully have enough propellant left on board to take on astronauts and make a single round trip from NRHO to the lunar surface.

Getting this plan to work requires solving a second engineering problem, how to keep cryogenic propellants cold in space. Low earth orbit is a toasty place, and without special measures, the cryogenic propellants Starship uses will quickly vent off into space. The problem is easy to solve in deep space (use a sunshade), but becomes tricky in low Earth orbit, where a warm rock covers a third of the sky. (Boil-off is also a big issue for HLS on the moon.)

It’s not clear how many Starship launches it will take to refuel HLS. Elon Musk has said four launches might be enough; NASA Assistant Deputy Associate Administrator Lakiesha Hawkins says the number is in the “high teens”. Last week, SpaceX's Kathy Lueders gave a figure of fifteen launches.

The real number is unknown and will come down to four factors:

  1. How much propellant a Starship can carry to low Earth orbit.
  2. What fraction of that can be usably pumped out of the rocket.
  3. How quickly cryogenic propellant boils away from the orbiting depot.
  4. How rapidly SpaceX can launch Starships.

SpaceX probably knows the answer to (1), but isn’t talking. Data for (2) and (3) will have to wait for flight tests that are planned for 2025. And obviously a lot is riding on (4), also called launch cadence.
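
To get a feel for how the four unknowns interact, here is a toy estimator. Every number in it is an illustrative assumption on my part, not a SpaceX or NASA figure; the point is only that modest changes in delivered mass, boil-off, and cadence swing the flight count between Musk's four and NASA's high teens.

    # Toy model of the tanker campaign. All inputs are assumed placeholders.
    HLS_PROP_NEEDED = 1200.0  # tons HLS must load at the depot (assumed)
    TANKER_DELIVERY = 150.0   # tons of propellant per tanker flight (factor 1, assumed)
    TRANSFER_FRACTION = 0.9   # fraction usably pumped over (factor 2, assumed)
    BOILOFF_PER_DAY = 0.001   # fraction of depot contents lost per day (factor 3, assumed)
    LAUNCH_INTERVAL = 6.0     # days between tanker launches (factor 4, assumed)

    depot, flights = 0.0, 0
    while depot < HLS_PROP_NEEDED:
        depot *= (1 - BOILOFF_PER_DAY) ** LAUNCH_INTERVAL  # losses since last launch
        depot += TANKER_DELIVERY * TRANSFER_FRACTION       # new delivery
        flights += 1

    print(flights)  # 10 under these assumptions; halve the delivered mass and it's 19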

The record for heavy rocket launch cadence belongs to Saturn V, which launched three times in under five months in 1969. Second place belongs to the Space Shuttle, which flew nine times in the calendar year before the Challenger disaster. In third place is Falcon Heavy, which flew six times in a 13 month period beginning in November 2022.

For the refueling plan to work, Starship will have to break this record by a factor of ten, launching every six days or so across multiple launch facilities.[19] The refueling program can tolerate a few launch failures, as long as none of them damages a launch pad.

There’s no company better prepared to meet this challenge than SpaceX. Their Falcon 9 rocket has shattered records for both reliability and cadence, and now launches about once every three days. But it took SpaceX ten years to get from the first orbital Falcon 9 flight to a weekly cadence, and Starship is vastly bigger and more complicated than the Falcon 9. [20]

Working backwards from the official schedule allows us to appreciate the time pressure facing SpaceX. To make the official Artemis landing date, SpaceX has to land an unmanned HLS prototype on the moon in early 2026. That means tanker flights to fill an orbiting depot would start in late 2025. This doesn’t leave a lot of time for the company to invent orbital refueling, get it working at scale, make it efficient, deal with boil-off, get Starship launching reliably, begin recovering booster stages,[21] set up additional launch facilities, achieve a weekly cadence, and at the same time design and test all the other systems that need to go into HLS.

Lest anyone think I’m picking on SpaceX, the development schedule for Blue Origin’s 2029 lander is even more fantastical. That design requires pumping tons of liquid hydrogen between spacecraft in lunar orbit, a challenge perhaps an order of magnitude harder than what SpaceX is attempting. Liquid hydrogen is bulky, boils near absolute zero, and is infamous for its ability to leak through anything (the Shuttle program couldn't get a handle on hydrogen leaks on Earth even after a hundred some launches). And the rocket Blue Origin needs to test all this technology has never left the ground.

The upshot is that NASA has put a pair of last-minute long-shot technology development programs between itself and the moon. Particularly striking is the contrast between the ambition of the HLS designs and the extreme conservatism and glacial pace of SLS/Orion. The same organization that spent 23 years and 20 billion dollars building the world's most vanilla spacecraft demands that SpaceX darken the sky with Starships within four years of signing the initial HLS contract. While thrilling for SpaceX fans, this is pretty unserious behavior from the nation’s space agency, which had several decades' warning that going to the moon would require a lander.

All this to say, it's universally understood that there won’t be a moon landing in 2026. At some point NASA will have to officially slip the schedule, as it did in 2021, 2023, and at the start of this year. If this accelerating pattern of delays continues, by year’s end we might reach a state of continuous postponement, a kind of scheduling singularity where the landing date for Artemis 3 recedes smoothly and continuously into the future.

Otherwise, it's hard to imagine a manned lunar landing before 2030, if the Artemis program survives that long.

Interior of Skylab

VII. Conclusion

I want to stress that there’s nothing wrong with NASA making big bets on technology. Quite the contrary, the audacious HLS contracts may be the healthiest thing about Artemis. Visionaries at NASA identified a futuristic new energy source (space billionaire egos) and found a way to tap it on a fixed-cost basis. If SpaceX or Blue Origin figure out how to make cryogenic refueling practical, it will mean a big step forward for space exploration, exactly the thing NASA should be encouraging. And if the technology doesn’t pan out, we’ll have found that out mostly by spending Musk’s and Bezos’s money.

The real problem with Artemis is that it doesn’t think through the consequences of its own success. A working infrastructure for orbital refueling would make SLS and Orion superfluous. Instead of waiting two years to go up on a $4 billion rocket, crews and cargo could launch every weekend on cheap commercial rockets, refueling in low Earth orbit on their way to the Moon. A similar logic holds for Gateway. Why assemble a space station out of habitrail pieces in lunar orbit, like an animal, when you can build one on Earth and launch it in one piece? Better yet, just spraypaint “GATEWAY” on the side of the nearest Starship, send it out to NRHO, and save NASA and its international partners billions. Having a working gas station in low Earth orbit fundamentally changes what is possible, in a way the SLS/Orion arm of Artemis doesn't seem to recognize.

Conversely, if SpaceX and Blue Origin can’t make cryogenic refueling work, then NASA has no plan B for landing on the moon. All the Artemis program will be able to do is assemble Gateway. Promising taxpayers the moon only to deliver ISS Jr. does not broadcast a message of national greatness, and is unlikely to get Congress excited about going to Mars. The hurtful comparisons between American dynamism in the 1960’s and whatever it is we have now will practically write themselves.

What NASA is doing is like an office worker blowing half their salary on lottery tickets while putting the other half in a pension fund. If the lottery money comes through, then there was really no need for the pension fund. But without the lottery win, there’s not enough money in the pension account to retire on. The two strategies don't make sense together.

There’s a ‘realist’ school of space flight that concedes all this but asks us to look at the bigger picture. We’re never going to have the perfect space program, the argument goes, but the important thing is forward progress. And Artemis is the first program in years to survive a presidential transition and have a shot at getting us beyond low Earth orbit. With Artemis still funded, and Starship making rapid progress, at some point we’ll finally see American astronauts back on the moon.

But this argument has two flaws. The first is that it feeds a cycle of dysfunction at NASA that is rapidly making it impossible for us to go anywhere. Holding human space flight to a different standard than NASA’s science missions has been a disaster for space exploration. Right now the Exploration Systems Development Mission Directorate (the entity responsible for manned space flight) couldn’t build a toaster for less than a billion dollars. Incompetence, self-dealing, and mismanagement that end careers on the science side of NASA are not just tolerated but rewarded on the human space flight side. Before we let the agency build out its third white elephant project in forty years, it’s worth reflecting on what we're getting in return for half our exploration budget.

The second, more serious flaw in the “realist” approach is that it enables a culture of institutional mendacity that must ultimately be fatal at an engineering organization. We've reached a point where NASA lies constantly, to both itself and to the public. It lies about schedules and capabilities. It lies about the costs and the benefits of its human spaceflight program. And above all, it lies about risk. All the institutional pathologies identified in the Rogers Report and the Columbia Accident Investigation Board are alive and well in Artemis—groupthink, management bloat, intense pressure to meet impossible deadlines, and a willingness to manufacture engineering rationales to justify flying unsafe hardware.

Do we really have to wait for another tragedy, and another beautifully produced Presidential Commission report, to see that Artemis is broken?

Notes

[1] Without NASA's help, it's hard to put a dollar figure on a mission without making somewhat arbitrary decisions about what to include and exclude. The $7-10 billion estimate comes from a Bush-era official in the Office of Management and Budget commenting on the NASA Spaceflight Forum.

And that $7.2B assumes Artemis III stays on schedule. Based on the FY24 budget request, each additional year between Artemis II and Artemis III adds another $3.5B to $4.0B in Common Exploration costs to Artemis III. If Artemis III goes off in 2027, then it will be $10.8B total. If 2028, then $14.3B.

In other words, it's hard to break out an actual cost while the launch dates for both Artemis II and III keep slipping.

NASA's own Inspector General estimates the cost of just the SLS/Orion portion of a moon landing at $4.1 billion.

[2] The first US suborbital flight, Freedom 7, launched on May 5, 1961. Armstrong and Aldrin landed on the moon eight years and two months later, on July 20, 1969. President Bush announced the goal of returning to the Moon in a January 2004 speech, setting the target date for the first landing "as early as 2015", and no later than 2020.

[3] NASA refuses to track the per-launch cost of SLS, so it's easy to get into nerdfights. Since the main cost driver on SLS is the gigantic workforce employed on the project, something like two or three times the headcount of SpaceX, the cost per launch depends a lot on cadence. If you assume a yearly launch rate (the official line), then the rocket costs $2.1 billion a launch. If like me you think one launch every two years is optimistic, the cost climbs up into the $4-5 billion range.
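
The arithmetic here is just fixed overhead divided by flight rate. A minimal sketch, assuming (as this note does) that the program's cost is dominated by a standing workforce of roughly $2.1B a year regardless of how often the rocket flies:

    ANNUAL_PROGRAM_COST = 2.1e9  # dollars per year, assumed fixed

    for launches_per_year in (1.0, 0.5):  # one per year vs. one every two years
        cost = ANNUAL_PROGRAM_COST / launches_per_year
        print(f"{launches_per_year} launches/yr -> ${cost / 1e9:.1f}B per launch")
    # 1.0 launches/yr -> $2.1B per launch
    # 0.5 launches/yr -> $4.2B per launch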

[4] The SLS weighs 2,600 metric tons fully fueled, and conveniently enough a dollar bill weighs about 1 gram. That works out to about 2.6 billion bills, or $2.6 billion, roughly half the per-launch cost estimated above.

[5] SpaceX does not disclose the cost, but it's widely assumed the Raptor engine used on Superheavy costs $1 million.

[6] The $145 million figure comes from dividing the contract cost by the number of engines, caveman style. Others have reached a figure of $100 million for the unit cost of these engines. The important point is not who is right but the fact that NASA is paying vastly more than anyone else for engines of this class.

[7] $266M is the figure you get by dividing the $3.2 billion Booster Production and Operations contract to Northrop Grumman by the number of boosters (12) in the contract. Source: Office of the Inspector General. For cost overruns replacing asbestos, see the OIG report on NASA’s Management of the Space Launch System Booster and Engine Contracts. The Department of Defense paid $130 million for a Falcon Heavy launch in 2023.

[8] Rocket Lab developed, tested, and flew its Electron rocket for a total program cost of $100 million.

[9] In particular, the separation bolts embedded in the Orion heat shield were built based on a flawed thermal model, and need to be redesigned to safely fly a crew. From the OIG report:

Separation bolt melt beyond the thermal barrier during reentry can expose the vehicle to hot gas ingestion behind the heat shield, exceeding Orion’s structural limits and resulting in the breakup of the vehicle and loss of crew. Post-flight inspections determined there was a discrepancy in the thermal model used to predict the bolts’ performance pre-flight. Current predictions using the correct information suggest the bolt melt exceeds the design capability of Orion.

The current plan is to work around these problems on Artemis 2, and then redesign the components for Artemis 3. That means astronauts have to fly at least twice with an untested heat shield design.

[10] Orion/ESM has a delta V budget of 1340 m/s. Getting into and out of an equatorial low lunar orbit takes about 1800 m/s, more for a polar orbit. (See source.)

[11] It takes about 900 m/s of total delta V to get in and out of NRHO, comfortably within Orion/ESM's 1340 m/s budget. (See source.)
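
Putting notes [10] and [11] side by side gives the margin calculation that drives the whole NRHO decision (figures in m/s, taken from the two notes):

    BUDGET = 1340  # Orion/ESM delta-v budget, m/s

    for orbit, round_trip in [("equatorial low lunar orbit", 1800), ("NRHO", 900)]:
        print(f"{orbit}: margin {BUDGET - round_trip:+d} m/s")
    # equatorial low lunar orbit: margin -460 m/s (Orion can't come home)
    # NRHO: margin +440 m/s (comfortable)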

[12] In Carrying the Fire, Apollo 11 astronaut Michael Collins recalls carrying a small notebook covering 18 lunar rendezvous scenarios he might be called on to fly in various contingencies. If the Lunar Module could get itself off the surface, there was probably a way to dock with it.

For those too young to remember, Tang is a powdered orange drink closely associated with the American space program.

[13] For a detailed (if somewhat cryptic) discussion of possible Artemis abort modes to NRHO, see HLS NRHO to Lunar Surface and Back Mission Design, NASA 2022.

[14] This is my own speculative guess; the answer is very sensitive to the dry weight of HLS and the boil-off rate of its cryogenic propellants. Delta V from the lunar surface to NRHO is 2,610 m/sec. Assuming HLS weighs 120 tons unfueled, it would need about 150 metric tons of propellant to get into NRHO from the lunar surface. Adding safety margin, fuel for docking operations, and allowing for a week of boiloff gets me to about 200 tons.
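
For what it's worth, the ideal rocket equation roughly reproduces that 150-ton figure. A sketch, where the 380-second vacuum specific impulse is my own assumption for a methane/oxygen engine, not a SpaceX number:

    import math

    G0 = 9.81      # standard gravity, m/s^2
    ISP = 380.0    # s, assumed vacuum specific impulse
    DV = 2610.0    # m/s, lunar surface to NRHO (from this note)
    M_DRY = 120.0  # tons, assumed HLS dry mass

    # Rocket equation solved for propellant: m_prop = m_dry * (e^(dv/(Isp*g0)) - 1)
    m_prop = M_DRY * (math.exp(DV / (ISP * G0)) - 1)
    print(m_prop)  # ~122 tons; margins, docking, and boil-off push it toward 200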

[15] The main safety issue is the difficult thermal environment at the landing site, where the Sun sits just above the horizon, heating half the lander. If it weren't for the NRHO constraint, it's very unlikely Artemis 3 would spend more than a day or two on the lunar surface.

[16] The ISS program has been repeatedly extended, but the station is coming up against physical limiting factors (like metal fatigue) that will soon make it too dangerous to use.

[17] Recent comments by NASA suggest SpaceX has voluntarily added an ascent phase to its landing demo, ending a pretty untenable situation. However, there's still no requirement that the unmanned landing/ascent demo be performed using the same lander design that will fly on the actual mission, another oddity in the HLS contract.

[18] To be precise, I'm talking about moving bulk propellant between rockets in orbit. There are resupply flights to the International Space Station that deliver about 850 kilograms of non-cryogenic propellant to boost the station in its orbit, and there have been small-scale experiments in refueling satellites. But no one has attempted refueling a flown rocket stage in space, cryogenic or otherwise.

[19] Both SpaceX's Kathy Lueders and NASA confirm Starship needs to launch from multiple sites. Here's an excerpt from the minutes of the NASA Advisory Council Human Exploration and Operations Committee meeting on November 17 and 20, 2023:

Mr. [Wayne] Hale asked where Artemis III will launch from. [Assistant Deputy AA for Moon to Mars Lakiesha] Hawkins said that launch pads will be used in Florida and potentially Texas. The missions will need quite a number of tankers; in order to meet the schedule, there will need to be a rapid succession of launches of fuel, requiring more than one site for launches on a 6-day rotation schedule, and multiples of launches.

[20] Falcon 9 first flew in June of 2010 and achieved a weekly launch cadence over a span of six launches starting in November 2020.

[21] Recovering Superheavy stages is not a NASA requirement for HLS, but it's a huge cost driver for SpaceX given the number of launches involved.


‘They call us the fatherless ones’: the trauma of families devastated by the infected blood scandal will last for generations


On the day of her uncle’s funeral in 1995, Jane’s life changed forever.* That was when she found out her uncle Edward, a person with haemophilia, had been infected with human immunodeficiency virus (HIV) from the treatment he was taking for his condition.

Adding to the family’s pain, the stigma that surrounded HIV and the disease it causes, Aids – because of its association with homosexuality and drug addiction – meant they kept the cause of Edward’s death to themselves. At the same time, they knew that Jane’s father, Roy, also had haemophilia and had been receiving the same treatment as his brother.

A rare genetic condition means that throughout their lives, people with haemophilia – of whom there are around 6,000 in the UK – must seek medical care when they bleed because one of their key blood clotting proteins, factor VIII or IX, is either partly or completely missing. In the 1970s and 80s, a new treatment to give people with haemophilia their missing protein using concentrated blood plasma was seen as potentially life-changing. In fact, it dealt many of them a death sentence.


The factor VIII concentrate was supplied by US pharmaceutical companies. Donors were paid for their blood, and much of it came from communities at higher risk of carrying infectious disease, including drug addicts and people in prison.

Gradually, haemophilia communities on both sides of the Atlantic noticed some among them were getting sick from a mysterious new virus. The first death of a person with haemophilia from Aids occurred in the US state of Florida in January 1982. The following year, both the Lancet medical journal and the World Health Organization published recommendations that people with haemophilia should be warned of the new health risks they faced – which also included infection with hepatitis C, a potentially deadly virus that affects the liver. Yet no such warnings were given.

While Edward soon became ill with HIV, Jane’s father did not reveal his hepatitis C infection, even to his daughter, until she was 18. He later died from liver cancer. Jane recalls:

My dad died ten years ago now – it’s nearly his anniversary. When he died, I went back to the doctors and said: ‘Do you think the hepatitis has caused the issues with his liver?’ The room fell silent. I didn’t need an answer. Their body language, their silence, told me everything I needed to know.

Jane says her father’s mistrust of doctors and medical advice meant he avoided the factor VIII treatments unless he really needed them, and “in some respects that prolonged his life” by limiting the amount of infected concentrate he was subjected to. One of Jane’s earliest memories is of him refusing to go to hospital, despite intense pain from a bleed into his joint. But each of these bleeds caused new damage to Roy’s body, resulting in increasing pain and disability as his life went on.


The societal stigma surrounding Aids meant many people with haemophilia lived with their infections in silence – assuming, that is, they were aware of their diagnosis. Another shocking aspect of this global contaminated blood scandal is that often, the victims weren’t being told the truth themselves.

During a recent conversation with her mother, Jane discovered that, for a long time, her father and uncle had not been told of their infections by doctors who by then knew about the problem of contaminated blood, leaving her family at risk of catching hepatitis C and her uncle at risk of passing on HIV. In her father’s case, it was only when, in 2004, he was notified by the NHS that factor VIII concentrate carried a very small risk of Creutzfeldt-Jakob disease (CJD) – a rare and fatal brain disease better known in the UK as “mad cow disease” – that he was informed this was because of his hepatitis C infection. Jane recalls:

My dad was like: ‘Excuse me, what?’ It was the same for my uncle Edward. There was no formal notification [of his HIV diagnosis] – the doctors and nurses just suddenly started wearing a lot of blue gloves around him.

Jane’s own story encapsulates the multigenerational impact of the infected blood scandal, which I (Sally-Anne) have researched with colleagues at the University of Gloucestershire. Jane carries the haemophilia gene, which is passed from mother to son with a 50% chance, and one of her two sons has haemophilia. Jane recalls the moment she told her father Roy, who was already infected and unwell with hepatitis C, that she was having a son:

We bought a blue romper suit and I took it home and gave it to my dad. He opened the bag and just threw it back at me. He went: ‘No, I can’t deal with this.’ And that’s not okay – he should have been proud, excited.

When Jane’s son was born, it was difficult for the family to face up to the treatments for haemophilia that would be a regular part of his life. She recalls her father “holding our newborn child, begging me not to ever let him have these treatments”.

‘A criminal cover-up on an industrial scale’

The infection of people with haemophilia is just one aspect of the global contaminated blood scandal – which in the UK is regarded as the “worst treatment disaster in the history of the NHS”. In total, around 30,000 NHS patients were infected with HIV and hepatitis C between 1970 and 1991, either through contaminated blood products such as factor VIII and IX or blood transfusions during surgery, treatment and childbirth.

Recently Sam Roddick, daughter of Body Shop founder Anita Roddick, wrote in the Sunday Times about a “chain of decisions that were morally unlawful” which led to her mother contracting hepatitis C from a blood transfusion after giving birth to Sam in 1971. The blood used for transfusions, which is donated for free in the UK, was not routinely screened for HIV until 1986 and hepatitis C only five years after that.

One person still dies every four days in the UK as a result of having received contaminated blood. An estimated 26,800 people became infected with hepatitis C and 1,243 with HIV. Of those infected with HIV, 380 were children – more than half of whom have died. Following earlier inquiries by Lord Archer and the Scottish government (which was branded a “whitewash” by some of those affected), the UK’s infected blood public inquiry was finally announced by the then-UK prime minister, Theresa May, in July 2017. She called the scandal an “appalling tragedy which should simply never have happened” – adding:

Today will begin a journey which will be dedicated to getting to the truth of what happened and in delivering justice to everyone involved.

A few months earlier, in his final speech as an MP in April 2017, Labour’s former health secretary Andy Burnham had described the scandal as a “criminal cover-up on an industrial scale”, suggesting there might be a case for corporate manslaughter charges. Of people like Jane’s father and uncle with haemophilia, Burnham said:

The Department of Health, and the bodies for which it is responsible, have been grossly negligent of the safety of people in the haemophilia community over five decades.

Like so many family members, Jane’s life plans as a young woman were turned upside down by her father’s illnesses. One of hundreds of witnesses heard during the seven-year inquiry, Jane wants the long-awaited final report, which will be published on May 20, to recognise the suffering of all those affected by the scandal, explaining:

I don’t think there’s been any real recognition for the families and what they’ve been through. People and families in particular have been destroyed by this. I was at university trying to be a teacher but dropped out, much to my university’s dismay. I wanted to be at home to stay with dad. There’s a generation of us that have lost our families – they call us ‘the fatherless ones’.

Many of those affected by the scandal blame the UK government and NHS trusts who they claim knew but did not share information about a potential infection risk with those taking the new treatment.

Deaths, loss, and continued denial

In January 1982, one of the UK’s leading experts in haemophilia, Arthur Bloom, co-wrote an infamous letter to haemophilia centres throughout the country, telling them that it was very important to ascertain whether a new American blood product already being given to people with haemophilia in the UK showed reduced levels of hepatitis C. “As far as we know,” he wrote, “the products have been subjected to a heat treatment process”, adding:

Although initial production batches may have been tested for infectivity by injecting them into chimpanzees, it is unlikely that the manufacturers will be able to guarantee this form of quality control for all future batches.

This method of producing factor VIII protein involved taking large amounts of blood (up to 40,000 units) from many different people and reducing this to a concentrate that could be easily self-injected at home. Bloom suggested “the most clearcut way” of testing the infectivity of the new heat-treated product was on patients requiring treatment who had not been previously exposed to large-pool concentrates – including children.

One of the children treated by Bloom himself at the University Hospital of Wales was Colin Smith, who had haemophilia and weighed just 13 pounds when he died of Aids in 1990 at the age of seven. He was a year old when he was given the factor VIII treatment, and his HIV status was confirmed at two-and-a-half. The stigma of HIV meant the family were shunned by many in their community, including having the words “Aids dead” painted on the side of their house in six-foot high letters. As Colin’s mother, Janet Smith, recently told BBC Wales:

We were known as the Aids family … We’d have phone calls at 12, one o'clock in the morning, saying: ‘How can you let him sleep with his brothers? He should be locked up, he should be put on an island’… He was three.

The same BBC investigation found evidence that Bloom had ignored internal NHS guidelines, written by his own department, that discouraged the use of the imported factor VIII treatment on children because of the risk of infection. Bloom was clearly aware of the risks when he began treating Colin in the autumn of 1983. “This wasn’t an accident,” Colin’s father said. “It could have been avoided.”

None of the young patients, known as “previously untreated patients”, nor their parents knew they were part of a nationwide experiment at the time. Documents subsequently released reveal that the UK government funded some of these studies – including one of pupils at Treloar’s College, a specialist school in Hampshire with an NHS haemophilia unit on site. Of 122 pupils with haemophilia attending the school between 1974 and 1987, 75 are reported to have died to date as a result of HIV and hepatitis C infections.

By 1984 – just over two years after the first death from Aids in the UK – government experts were aware that people receiving American factor VIII blood concentrate were at risk of HIV infection. Yet despite the mounting evidence, denials and silence continued well into the 1990s.

Trevor Graham, one of the hundreds of contributors to the infected blood inquiry, spoke to us about his father, who had haemophilia and died in 1991 when Graham was only 13. “We had no idea at the time he had died of Aids,” Graham explains. “We thought he died of a brain haemorrhage, as that was what the doctors treating dad at the Manchester Royal Infirmary told my mother.”

Yet for the four years before his death, Graham’s father had been unable to work and sought the support of the Macfarlane Trust, a discretionary grant-making trust that was set up and funded by the then-Department of Health to “alleviate the financial needs of those haemophiliacs infected with HIV through contaminated NHS blood products”, and also their families. Graham says:

It is heartbreaking to read the letters my dad wrote requesting assistance, one of which states that he was concerned about Christmas presents for myself and my sister. In that letter, he stated he was HIV positive and couldn’t work as a result of his infection.

Despite there being no reference to HIV on their father’s death certificate, Graham says rumours soon spread around their school and local community. Once again, the legacy of this infection continues to affect following generations:

My sister and I were bullied at school. People said that our dad was gay and that he died of Aids. Mum became agoraphobic when I was 13 and was advised to see a psychiatrist, but in her grief she refused. I was suffering from hidden anxiety as a young teenager and developed a stutter. The anxiety and bouts of depression have never left me since my dad passed away. Even 30 years later, I still struggle with my mental health.

A monster arrives

“The monster arrived as a wolf in sheep’s clothing,” writes Elaine DePrince in her moving memoir about the contaminated blood scandal in the US, Cry Bloody Murder. The monster was factor VIII concentrate created from blood infected with HIV and hepatitis C. Three of her sons had haemophilia; all three would die slow, painful deaths due to Aids, having been infected by the treatment that was meant to help them lead normal lives:

When Teddy died, he was the last of our three boys with hemophilia and Aids to leave us. He was the last of our three little boys, our three musketeers … He was 24 years old, and it seemed like he had lived forever with Aids.

In the book, DePrince, whose family were living in a suburb of Philadelphia, describes an earlier conversation with her husband when a warning label finally appeared on vials of factor VIII concentrate. She pointed out there was no need to worry, as all three of their sons with haemophilia were already infected with HIV.

As their youngest son Cubby’s condition worsened, he wrote a list to ease his concerns about other children getting Aids, at a time when it was untreatable, entitled “64 reasons why you do not want to get AIDs”. These included:

If your liver gets too big, you have to sit half-lying down and half-sitting up. Then it’s hard to paint your model airplanes because the paint drips on your stomach.

The battle to gain justice took DePrince from writing letters to campaigning for a change in the law and writing a book to explain the reality of the contaminated blood scandal and her family’s suffering from it. She concludes:

I cannot repress my sorrow, my pain, and my rage … The FDA [US Food & Drug Administration] failed my children. The blood-banking industry failed them. Government agencies failed them. The law failed them.

Jonathan is a haematologist in the US who comes from a family of men with haemophilia. In 1989, when he was around seven years old, both his uncles were infected with HIV. One died in 1992 and the other shortly afterwards. “Our family and the haemophilia community were ravaged – we lost an entire generation. I had to watch my uncles deteriorate over the years.”

Jonathan, who also has haemophilia, grew up in a rural suburb in Illinois. He reflects on how that made getting treatment all the harder for his uncles:

It turns out that not only was there the contaminated supply that ravaged an entire generation of people with haemophilia and other severe bleeding disorders, but there wasn’t even equal access to care in the US at that time. Growing up in the Midwest, we didn’t have the same HIV therapies available on the east and west coasts of the US, where HIV research was being done. Some of the medical innovations at that time really did not penetrate the heartland of the US like it did on the coasts. So, I just had to watch my uncles deteriorate.

Jonathan himself was “only” infected with hepatitis C from his treatment. He says “that actually made me feel guilty – why was I spared [from HIV and Aids]? You know, everyone else is dying. Why should I be alive?”

The experience drove him to become a doctor in haematology, in order to try to make the experience better for other families like his:

People have been left to suffer. I grew up not knowing if I was going to live. The sad thing now, being a physician, is that HIV is such a manageable disease now.

The fight for justice

Across the world, many people have devoted their lives to fighting for justice for all those affected by the contaminated blood scandal. In the UK, groups such as TaintedBlood, Birchgrove Group, Factor 8, BloodLoss Families, Contaminated Blood Campaign, Contaminated Whole Blood UK and many others have continued the brave battles of the early whistleblowers and campaigners.

Jason Evans’ father Jonathan, who had haemophilia, was infected with HIV and hepatitis C and died in 1993 aged 31, when Evans was four. Evans has been campaigning for justice for his father and others for more than a decade, using freedom of information requests to unearth documents relating to the scandal. In one shocking memo from 1985, a UK government official discussed the financial implications of the fact that many people with haemophilia who were infected with HIV would soon die:

Of course, the maintenance of the life of a haemophiliac is itself expensive, and I am very much afraid that those who are already doomed will generate savings which more than cover the cost of testing blood donations.

Evans, the founder and director of the campaign group Factor 8, is leading a legal action against the UK government on behalf of more than 500 people. The claimants have been granted permission to launch a High Court action to seek damages, but proceedings are currently on hold pending the inquiry’s final report on May 20.

Evans has expressed concern that ministers are “seeking to water down” the inquiry’s strong recommendations from the interim reports. He recently told the Guardian:

What I want from the inquiry is for it finally to be on the official record that what happened was entirely preventable and was motivated by unethical practices. For decades, the line from government was that this was an unavoidable accident that no one could possibly have foreseen – that no one did anything wrong.

In November 2022, “interim” compensation payments of £100,000 each were made to around 4,000 infected people or their bereaved partners in the UK (on top of an “ex gratia” payment by the government in 1990 of £20,000 or £25,000, depending on how badly a patient’s body had been damaged by their infection). But this has left many others affected by the scandal without any compensation – including those who have lost their children or parents, and those whose death left nobody behind to claim.

However, a recent amendment to the Victims and Prisoners Bill added a requirement for the UK government to set up a compensation scheme within three months of its passing on May 1. On May 5, The Times reported that ministers were preparing a compensation package of at least £10 billion for contaminated blood victims; the details are to be announced after the public inquiry’s report is released.

Two court cases are in progress in the UK: the one led by Evans, and another against Treloar College brought by 36 former students, who claim the college breached its duty of care by giving them the treatment without discussing the risks with the students or their parents. In 2023, in testimony to the inquiry, the college’s former headteacher, Alec Macpherson, admitted that doctors at the school were “experimenting with the use of factor VIII”.

Elsewhere, criminal proceedings were brought against government officials and executives in pharmaceutical companies as long ago as the 1990s, with French and Japanese officials being given prison sentences. In 1997, Bayer and the other three manufacturers of the factor VIII concentrate paid out a total of US$660 million (around £1 billion in today’s prices) to the estimated 6,000 people with haemophilia who were infected in the US.

There is also the potential for criminal charges or other consequences for those involved in the UK scandal. It is possible that those identified as responsible may be charged with gross negligence manslaughter, and, in the case of collective fault of an organisation, corporate manslaughter charges could be brought. Individuals who supplied the contaminated blood could be prosecuted for grievous bodily harm.

Campaigners often use the phrase “justice delayed is justice denied” – not least because someone infected with contaminated blood still dies every four days in the UK. But the effects of this medical scandal will be felt for generations to come – and whatever the outcome of the inquiry, campaigners will continue to fight for justice. As Evans explained when he was nominated for an award in 2021:

I think something that fuelled our renewed campaign was a new energy, particularly from those whose parents had died. We were grown up now and we were angry. I think that energy spread to the older campaigners who had been let down by the government time and time again.

This complex, seven-year inquiry was forced to delay its final report for five months to allow the many people and organisations referenced sufficient time to respond. Some victims have found out things they did not know about their treatment. Others have called for national memorials for the victims in each of the UK’s nations – including one specifically for the children infected at Treloar College.

The inquiry has affected people in different ways. Some have felt compelled to attend every sitting. Harrowing testimony has been heard throughout – not least when Colin and Janet Smith spoke about their son Colin, the youngest person to have been infected in the UK. His father told the inquiry:

There’s no way a child should have to die the way Colin did. It wasn’t pleasant. It still affects us now. But it’s not just our son – there’s lots of children who have had to go through that … I would cope with death, but not with the death of my son. I still have trouble today; the fact that he’s in a grave on his own. The guilt will never go away.

*Some names in this article are pseudonyms, created to protect the identity of our interviewees.



Mass production of ornamentation and its recent decline | MetaFilter


I am grateful that my house was built in 1876, because the roofline is very ornate, and the interior of the house has all sorts of plaster crown moulding. Do I love plaster from a home maintenance perspective? I do not. But the moulding is gorgeous. There are also some killer ceiling medallions.
posted by grumpybear69 at 12:20 PM on May 17 [3 favorites]

Paint the goddamn things for fucks sake. Every single fuckin photo in there looked naked, where the fuck are the painters!! Why is nothing painted in the world anymore, and if it is, it's the blandest mono colour they can manage. There are hundreds of millions of painters roving around the world with nothing to paint on. Every grey concrete surface you see is a sign of failure, a blank canvas; humanity has organized itself so pathetically, comically poorly that the surface will never be painted, and if it is, the society will bizarrely pay to remove it, determined to have as boring and ugly as fuck a world as this wretched and wonderful species wants it to be. I'm sick of trying to appreciate the "natural" beauty of stone and cement and I'm sick of acting like "natural" means anything, especially in the context of an artificial construction, and of covering them with paints that are equally as natural as everything a human ever uses, or whatever that irritating term was ever meant to mean. Sloppy aimless rant but the oppressive greys of this world really get me red in the face.
posted by GoblinHoney at 12:38 PM on May 17 [21 favorites]

It's a good article. Not sure if I agree with its conclusions, but thought-provoking and worth engaging with.
posted by biogeo at 1:40 PM on May 17 [2 favorites]

My dream is that sometime soon green ornamentation will become the new modern, new building design will incorporate creative ways for plants to ornament as many external surfaces as possible, and buildings without it will look naked and old-fashioned.
posted by trig at 2:02 PM on May 17 [9 favorites]

The Baha'i temple is quite beautiful. Part of what makes it so is that it is itself a kind of ornamentation, overlooking the lakeshore.
posted by HearHere at 2:09 PM on May 17 [3 favorites]

This is fantastic, and a beautiful website I had never seen before - thank you!
posted by superelastic at 2:35 PM on May 17 [1 favorite]

That’s a really good point, rebent. You can buy a fiberglass Corinthian column for your front porch for a couple hundred bucks, but there’s no cheap way to obtain a 20-foot floor-to-ceiling window.
posted by Just the one swan, actually at 3:39 PM on May 17 [1 favorite]

This article suffers from a sort of humanities version of engineers' disease. It's all about the details rather than the actual understanding of the underlying problem. From ancient times ornamentation has had several functions. The most basic architectural function is that when two building components meet, there will be some sort of a seam (I'm not sure I'm using the correct terminology in English, but I hope you get my point), and this seam was very hard to make perfect. This was not only an aesthetic issue, cracks are where the light gets in, but also where moisture, dirt, cold air and pests get in. So you would cover the seam with a profile that could in a way connect and close the components. This is why one would get the most ornamentation everywhere things met: around doors and windows, where the wall met the ceiling and the floor, and around the hooks that carried chandeliers or wall-mounted lamps. On the outside of the building, the critical points were where the building met the ground, and where the walls met the roof, and again around openings. For architects, it could be very interesting to use these details as a form of expression. The classical orders represented different human or godly properties, like strength or bounty.

The orders are very interesting. Vitruvius, writing in the 1st century BC, describes three. I think we have a couple more that are broadly seen as classical, and then there are of course plenty of others in other cultures. But the point of the orders is that they define a system. Imagine a building site in classical antiquity. The architects and clients and some other people were very learned people with lots of knowledge of international architecture that they found through travels. But the majority of workers were illiterate. There were no blueprints, and while there definitely were drawings and models on site, they weren't spread out all over the place. So there had to be a common language that could be conveyed to everyone on site: the orders. An architect and contractor could enter the site and tell everyone: we are going to build a Doric temple, with these basic measurements like this model, and then everyone would know what to do, because the system covered every aspect of the building: the general outline, the columns and their decorations and all the other details of the building. Very cool.

Then on top of the system, there were the functions of symbolism, including showing one's wealth and/or purpose in life. This is where the decorations on the surfaces come in, including stained glass from the late Middle Ages onward. Sometimes the client would have a very strong desire to have narratives in the space, and downplay the spatial interest in favor of rich paintings or tapestries. The Sistine Chapel is pretty boring, spatially, and I am inclined to believe this was on purpose, because the Popes really wanted to send a message through the imagery. But the images could also take the form of carvings or stucco. All of these images had their own life, independently of the architecture, though the artists and artisans would most often work with the space in different ways. Also, the decorations weren't always figurative, since color and materials had meanings in themselves. In Islamic architecture, depictions of humans and animals are often not allowed, so the decoration may be a mix of calligraphy and geometric designs, both praising God.

All good. This hierarchy of a construction system (which obviously changed over time) and a meaningful decoration worked fine for at least 4000-ish years. Then during the 18th century it began to fall apart, mostly because of the beginning industrialization, but in the beginning NOT because of the industrialization of building parts, but because of the new generations of wealthy people who felt less attached to the old orders. This is not a pun. There is a reason we use order to describe societal rigor as well as architectural systems. The people of the enlightenment were not convinced that the old systems and moral narratives were appropriate ways of understanding the world, and they began to challenge the conventions, with oriental follies and decorations that had no other meaning than to delight the spectator. Classicism didn't disappear, but it became a style, alongside all the other historical and global styles.

Then during the 19th century, building components did become industrialized, and relatively cheap. Everyone could have all the ornaments, and they mostly did. But in that new context, the original purposes and meanings of the ornaments and decorations were almost entirely lost. Ornaments were just thrown randomly all over facades and interiors. There were some heroic attempts to return to order, for instance by Louis Sullivan in Chicago and Adolf Loos in Vienna. Contrary to how they are read today, they were both architects who fully mastered their order and ornamentation. People forget that the original purpose of the Bauhaus was to educate artisans to build future cathedrals. And there are still architects who work in that tradition. But mostly it was a vulgar mess and a lot of really bad construction. The reason we don't know so much about it is that a lot of 19th century buildings have been torn down because they were unsafe.

Young architects during the first decades of the 20th century dreamt of returning to the local vernacular architectures of the different regions. Using local materials and methods and letting the meaning grow out of the process and functions.

After WW1, some realized that things were completely different. The shapes of the old orders had grown organically out of timber and stone construction. What could it mean that construction in the future would be based on steel and concrete? What properties do these materials have that in their own way can form the basis of a new organic order? They knew that it was possible to make a cast-iron Corinthian column, but also that that column would lack the beauty and precision of a column carved in stone. They knew it was possible to cast a profile in concrete, but also that it would lack the luminance and delicacy of a plaster molding. On the other hand, they knew from engineering works that steel especially could accommodate a very high degree of precision in assembly, even when it was standard components. And that concrete could be shaped into organic forms that had never been seen before.

In these buildings, ornaments would have undermined the narrative and the order.

This is too long, but I need to write a little bit about the curtain wall. Most buildings now are built on the principles of the curtain wall, even if the walls are made of concrete and bricks. The main idea is to separate the load-bearing structure from the facade. This means there are as few places as possible where heat can be transferred from inside to outside and outside to inside, which saves money on AC and heating. The curtain wall can have any form of decoration you want, and sometimes it can even serve a purpose, in filtering sunlight or protecting privacy.

But contemporary architecture is struggling with the same problems as that of the ancients: there are so many seams everywhere, and they are problematic. Someone needs to do something. A lot of the perceived ugliness of contemporary construction is about the poor quality and all the issues that arise from unsolved problems. Modernism has become a style, just like classicism, and it has lost its original meaning. It's OK to hate it. But I don't think the article in the OP understands why.

posted by mumimor at 5:27 PM on May 17 [26 favorites]

While the WiP owner is Libertarian, I doubt Sam is a card-carrying New Urbanist (I have designed with the NU crowd); his Twitter feed is just too diverse, he's not always skeptical... I nearly forgot I'm supposed to filter!

Oh fuck! he's a Tufton Street man. His Twitter bio lists his employer as @CPSThinkTank, the Centre for Policy Studies (CPS, founded 1975 by Sir Keith Joseph – along with Patrick Minford, the brains behind Thatcherism [wikipedia]). That puts Stripe in the same orbit.

This means all of Works in Progress should be treated as astroturf. As a designer, I find Works in Progress and its contents VERY attractive. I think it will suck a lot of people in.

CPS is very tightly aligned with the Christian fundamentalist US Council for National Policy [splcenter.org] (founded 1981). Link has a whole rogues gallery so CW applies.

Re the fall of building ornament: I suspect Samuel Hughes has deliberately omitted (as he is thorough – and pedantic) the deeper real financial reason (the article has an odd two-thread structure when I read it again). I put this on his tweet, but during my degree (and since) I've dug deeply into an argument by James Russell in a June 2003 Architectural Record article, Leading the Money. [I have a .pdf as it's extremely hard to find] Russell cites Chris Leinberger @ChrisLeinberger:

"The real difference between the prewar era and now, he contends, is that investors then expected to reap their rewards over a very long time - and did.".

Leinberger (who seems a very secular and anti-Trump person – which makes me feel better for New Urbanism as opposed to the CPS) was doing interesting developments in Albuquerque at the time, based on treating buildings as nested tranches with different-age returns, in order to set a building up (like a pre-1930s one) where it would be worthwhile upgrading every 30+ years. And to invest more money into a higher street-facing facade/frontage, and gain a longer, higher-level lease from this finer ornamentation.

posted by unearthed at 9:29 PM on May 17

The article makes a lot more sense in light of the political stuff. Hughes is anti-modernity, aesthetically and politically.
posted by vitia at 9:51 PM on May 17 [1 favorite]



The beauty of concrete - Works in Progress


One of the unifying features of architectural styles before the twentieth century is the presence of ornament. We speak of architectural elements as ornamental inasmuch as they are shaped by aesthetic considerations rather than structural or functional ones. Pilasters, column capitals, sculptural reliefs, finials, brickwork patterns, and window tracery are straightforward examples. Other elements like columns, cornices, brackets, and pinnacles often do have practical functions, but their form is so heavily determined by aesthetic considerations that it generally makes sense to count them as ornament too.

Ornament is amazingly pervasive across time and space. To the best of my knowledge, every premodern architectural culture normally applied ornament to high-status structures like temples, palaces, and public buildings. Although vernacular buildings like barns and cottages were sometimes unornamented, what is striking is how far down the prestige spectrum ornament reached: our ancestors ornamented bridges, power stations, factories, warehouses, sewage works, fortresses, and office blocks. From Chichen Itza to Bradford, from Kyiv to Lalibela, from Toronto to Tiruvannamalai, ornament was everywhere.

Since the Second World War, this has changed profoundly. For the first time in history, many high-status buildings have little or no ornament. Although a trained eye will recognize more ornamental features in modern architecture than laypeople do, as a broad generalization it is obviously true that we ornament major buildings far less than most architectural cultures did historically. This has been celebrated by some and lamented by others. But it is inarguable that it has greatly changed the face of all modern settlements. To the extent that we care about how our towns and cities look, it is of enormous importance.

The naive explanation for the decline of ornament is that the people commissioning and designing buildings stopped wanting it, influenced by modernist ideas in art and design. In the language of economists, this is a demand-side explanation: it has to do with how buyers and designers want buildings to be. The demand-side explanation comes in many variants and with many different emotional overlays. But some version of it is what most people, both pro-ornament and anti-ornament, naturally assume.

However, there is also a sophisticated explanation. The sophisticated explanation says that ornament declined because of the rising cost of labor. Ornament, it is said, is labor-intensive: it is made up of small, fiddly things that require far more bespoke attention than other architectural elements do. Until the nineteenth century, this was not a problem, because labor was cheap. But in the twentieth century, technology transformed this situation. Technology did not make us worse at, say, hand-carving stone ornament, but it made us much better at other things, including virtually all kinds of manufacturing and many kinds of services. So the opportunity cost of hand-carving ornament rose. This effect was famously described by the economist William J Baumol in the 1960s, and in economics it is known as Baumol’s cost disease.

To put this another way: since the labor of stone carvers was now far more productive if it was redirected to other activities, stone carvers could get higher wages by switching to other occupations, and could only be retained as stone carvers by raising their wages so much that stone carving became prohibitively expensive for most buyers. So although we didn’t get worse at stone carving, that wasn’t enough: we had to get better at it if it was to survive against stiffer competition from other productive activities. And so the labor-intensive ornament-rich styles faded away, to be replaced by sparser modern styles that could easily be produced with the help of modern technology. Styles suited to the age of handicrafts were superseded by the styles suited to the age of the machine. So, at least, goes the story.
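To make the mechanism concrete, here is a minimal numeric sketch of the Baumol effect in Python. Every figure is invented for illustration; none comes from the essay or from Baumol's own work:

```python
# Toy model of Baumol's cost disease, with invented numbers.
# Two trades start with equal productivity and equal wages.
factory_productivity = 1.0   # widgets per hour
carving_productivity = 1.0   # ornaments per hour (never improves)
wage = 1.0                   # pence per hour, both trades

# Technology makes factory labor 10x more productive; carving is untouched.
factory_productivity *= 10

# To keep carvers from defecting to the factory, carving wages must
# roughly track factory wages, which rise with factory productivity.
new_wage = wage * 10

# The labor cost of one ornament rises tenfold, even though nobody
# got any worse at carving.
print(wage / carving_productivity)      # before: 1.0 pence per ornament
print(new_wage / carving_productivity)  # after: 10.0 pence per ornament
```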

This is what economists might call a supply-side explanation: it says that desire for ornament may have remained constant, but that output fell anyway because it became costlier to supply. One of the attractive features of the supply-side explanation is that it makes the stylistic transformation of the twentieth century seem much less mysterious. We do not have to claim that – somehow, astonishingly – a young Swiss trained as a clockmaker and a small group of radical German artists managed to convince every government and every corporation on Earth to adopt a radically novel and often unpopular architectural style through sheer force of ideas. In fact, the theory goes, cultural change was downstream of fairly obvious technical and economic forces. Something more or less like modern architecture was the inevitable result of the development of modern technology.

I like the supply-side theory, and I think it is elegant and clever. But my argument here will be that it is largely wrong. It is just not true that twentieth-century technology made ornament more expensive: in fact, new methods of production made many kinds of ornament much cheaper than they had ever been. Absent changes in demand, technology would have changed the dominant methods and materials for producing ornament, and it would have had some effect on ornament’s design. But it would not have resulted in an overall decline. In fact, it would almost certainly have continued the nineteenth-century tendency toward the democratization of ornament, as it became affordable to a progressively wider market. Like furniture, clothes, pictures, shoes, holidays, carpets, and exotic fruit, ornament would have become abundantly available to ordinary people for the first time in history.

In other words, something like the naive demand-side theory has been true all along: to exaggerate a little, it really did happen that every government and every corporation on Earth was persuaded by the wild architectural theory of a Swiss clockmaker and a clique of German socialists, so that they started wanting something different from what they had wanted in all previous ages. It may well be said that this is mysterious. But the mystery is real, and if we want to understand reality, it is what we must face.

Manufacturing ornament before modernity

The supply-side theory has two parts: a story about how ornament was handcrafted before modernity, and a story about how this wasn’t compatible with rising labor costs. Strikingly, a part of the first story is untrue: far from relying on bespoke artisanal work, many premodern builders used certain kinds of mass production whenever they could. But overall, the supply-side story is still an accurate description of this period: although premodern builders used labor-saving methods where possible, their opportunities for doing so were limited by low populations, low incomes, and poor transport technology, and until modern times, making ornament really was pretty labor-intensive.

There are two main methods of making ornament: carving and casting.

Carving involves removing material until only the desired form remains; casting involves shaping a material into the desired form while it is soft and then hardening it. Not all architectural ornament is produced in these ways (for example, wrought ironwork and ornamental brickwork are not), but a surprisingly high proportion is, so I shall focus on these two methods here.

First, carving. From the Renaissance to the nineteenth century, the creation of carved ornament went through several stages in a method called indirect carving. First, a design for the ornament was hand drawn by an architect and modeled in clay by a specialist craftsman called an architectural modeler. Because clay models fall apart when they dry out, it might then be cast in plaster for durability. The design would then be laboriously transferred to a block of stone or wood using something called a pointing machine, a framework of needles calibrated to points on the model so that they show exactly how much of the stone or wood has to be drilled and chiseled away to replicate its form (search YouTube for ‘pointing machine’ to find many videos of these). This carving work was done by hand by a second group of skilled craftsmen. The actual designers would probably never touch either the model or the final product.

Even figure sculpture was produced using a version of this method: the sculptor would model the statue in clay, then craftsmen would transfer the design to stone, often via an intermediate plaster cast. The indirect carving of sculpture dates back to antiquity, and many of the most famous antique statues are Roman copies of Greek originals, including the Apollo Belvedere and the Venus de Medici. Indirect carving faded away in the Middle Ages but was revived in the Renaissance and improved steadily in the following centuries. Initially, indirect carving was used to get the figures roughly right, after which the sculptor would take over to execute the details. But by the later eighteenth century, pointing machines were so good that many sculptors did little work on the actual statue: sculpting was basically an art of modeling in clay, and carving was a sophisticated but largely mechanical process. Canova, Thorvaldsen, and Rodin all worked this way. The stone sculptures that adorn the centers of old European and American cities are mostly stone copies of plaster copies of long-lost clay originals.

Indirect carving enables a limited sort of mass production. It makes it possible to get far more out of one scarce factor of production, namely talented designers. This has some value with figure sculpture: there seem to have been carving factories in the Roman Empire mass-producing copies of the most admired statues. But it really comes into its own with other architectural ornament. The Palace of Westminster is covered with tens of thousands of square meters of extraordinarily ingenious and coherent ornament. This is not because Victorian London was awash with carver-sculptors of genius. It is because virtually every detail of the enormous building, down to the last molding profile, was designed by one man, the strange and brilliant Augustus Pugin. Pugin carved nothing, but he produced an immense flood of drawings, which were executed in stone and wood by numberless other hands. Indirect carving made Pugin many thousands of times more productive than he could have been otherwise.

The prevalence of indirect carving shows that premodern builders were keen to rationalize the production process where possible. But the sketch above also shows how labor-intensive carving remained. Premodern machinery had allowed a tiny number of elite architects to design a relatively huge amount of ornament. But the rest of the carving process was largely manual and bespoke as late as the nineteenth century, using much the same tools as the ancient Greeks, and requiring a huge workforce. Perhaps surprisingly, technology revolutionized the productivity of the creative artist long before it revolutionized any other part of the production chain.

Cast ornament shows the same pattern, with some limited mechanization accompanying persistent labor-intensiveness. Cast ornament is made of materials that are originally soft, or that can be made so temporarily through heating or mixing with water. Up to the nineteenth century, the principal materials for cast ornament were clay and plaster, while bronze was the preferred material for cast sculpture. The process of making cast ornament would begin in the same way as that of carved ornament, with drawings and often models. Molds would then be carved in wood or cast from the models in metal, plaster, or gelatine. The mold would then be used to shape the material. There are various ways of doing this, depending on the casting material and the complexity of the ornament.

Some kinds of mold are destroyed in the casting process, but most are reusable many times. And while some casting materials (e.g., bronze) are expensive, others (e.g., clay and plaster) are cheap once the infrastructure for producing them is in place. So once the initial investment in kilns and molds is made, large quantities of cast ornament can be produced at low marginal cost. This means that mass production of ornament has been theoretically possible since very early times.

Despite this, factory production of ornament did not become general practice until the nineteenth century. The reason for this is presumably that markets were so small that these economies of scale could not be realized. Today, much of the best cast ornament in Britain comes from a factory near Northampton run by a company called Haddonstone, whose products I return to below. Haddonstone has customers dispersed fairly evenly across Britain, and it also exports to Ireland, Continental Europe, the Middle East, and the United States. In a premodern economy, with fantastically high transport costs, its market would have been far smaller, perhaps indeed just the town of Northampton – and because premodern societies were extremely poor, Northampton would have been an even smaller market than it is now.  Instead of a potential market of millions of new buildings annually, its potential market could easily have been in single digits. It is highly improbable that the fixed costs of factory production would be worthwhile under these conditions.
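The fixed-versus-marginal-cost logic here can be made explicit with a toy break-even sketch. All numbers are invented for illustration; nothing below is drawn from Haddonstone's actual costs:

```python
# Hand carving needs no setup but has a high unit cost; casting needs a
# large one-time investment (kilns, molds) but a low marginal cost.
FIXED_COST = 10_000.0   # one-time investment in kilns and molds
CAST_UNIT = 5.0         # cost to cast one ornament once set up
CARVE_UNIT = 50.0       # cost to hand-carve one equivalent ornament

def casting_is_cheaper(n: int) -> bool:
    """True if casting n ornaments beats hand-carving them."""
    return FIXED_COST + CAST_UNIT * n < CARVE_UNIT * n

# Break-even volume: FIXED_COST / (CARVE_UNIT - CAST_UNIT), about 222 pieces.
print(casting_is_cheaper(10))      # False: a single-town premodern market
print(casting_is_cheaper(10_000))  # True: a railway-era regional market
```

On these made-up numbers, the factory only pays for itself past a few hundred pieces – exactly the scale a premodern town could not supply.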

The upshot of this is that premodern cast ornament was seldom able to exploit its natural scalability. The cheap cast materials probably always tended to be cheaper than stone carving, but this advantage was not marked, and many premodern societies used carved stone for a wide range of public buildings. In many times and places, wood ornament, which is much easier to carve than stone, was used in common buildings. This suggests it was competitive against plaster and terracotta even at the most budget end of the premodern market for ornament.

In its essentials, the supply-side story is thus true of premodern ornament, even though the romantic idea that every piece of premodern ornament is an original work of art is largely inaccurate. Nearly all premodern ornament was mechanically copied in some way, and some premodern manufacturing methods could in theory have been scaled up to mass production. The claim that modern mechanically produced ornament is distinctively inauthentic or uncreative is highly dubious: mechanical copying has been widespread for many centuries. But premodern copying industries were themselves small-scale and labor intensive, and it is plausible that ornament was only widely used in these societies because labor was so affordable.

Manufacturing ornament in modernity

The supply-side story says that these labor-intensive industries failed to evolve in modernity, and so lost out to competition from industries that did. But the first claim here just isn’t true: in fact, the manufacture of ornament was revolutionized in the nineteenth and twentieth centuries. Three changes are worth drawing out.

First, inventive toolmakers mechanized the carving process. This is only a qualified truth in the case of stone carving. By the early twentieth century, sophisticated planing machines were capable of cutting simple moldings, column shafts, and so forth with little or no manual finishing work. However, more complex ornaments continued to be carved by hand. A planing machine works by gradually sanding down a block, wearing off material through abrasion until the desired profile is left. This means it is good for producing ornaments that consist essentially of a single profile extended in one dimension. But it cannot easily produce ornaments with undercutting (i.e., drooping projections), and it certainly cannot produce complex multidimensional ornaments like Corinthian capitals or Gothic pinnacles.

In fact, stonework is only finally being mechanized today. I recently visited what is probably the world’s most advanced factory for cutting stonework with a computer-controlled machine, Monumental Labs in New York City. Monumental Labs has constructed a robot that scans a model and then carves it from blocks of stone. The robot works about two to four times faster than a stone carver, and of course it works nonstop, meaning that its overall productivity is 6-12 times greater (the two-to-fourfold speed advantage multiplied by roughly three times the working hours). It is capable of executing about 95 percent of the carving process, even for figure sculpture, where exact precision is particularly important. Unsurprisingly, Monumental Labs is quickly capturing market share from rivals who still do much of the work with pointing machines and hand carving. Over the next few years, they may succeed in finally mechanizing the process of stone carving. But this is only happening in the 2020s, after natural stone carving has undergone a long decline. So with respect to stonework, the supply-side story may have some validity.

In the case of woodwork, however, mechanization was extraordinarily successful. Two key innovations were steam-powered milling machines and lathes in the nineteenth century. A milling machine has spinning cutters shaped like the negative of the desired profile of the molding. When a beam of wood is passed through it, the cutters remove exactly the correct volume of wood, and an essentially finished ornament emerges on the other side, with many hours of manual carving work completed in seconds. A lathe works on a modification of the same principle: the piece of wood is spun, and the blade is held steady. It is used for things like balusters and columns. Lathes, unlike milling machines, had existed before the Industrial Revolution, but steam made them much more powerful.

In Europe, the effect of these advances was obscured by fire safety laws that tended to ban woodwork on the exterior of urban buildings. But such laws were generally absent in the United States, where there was thus an enormous proliferation of ornamental woodwork in the late nineteenth century, a process bound up with the popularity of what Americans call the ‘Queen Anne’ and Eastlake styles. The ban on exterior woodwork was also lifted in England in the 1890s, resulting in a revival of woodwork decoration that is so characteristic of Edwardian houses, and that makes many Edwardian neighborhoods so much more cheerful than their Victorian predecessors. Although these machines could not generate every kind of woodwork (unlike the astonishing computer-controlled machines, known as CNC machines, that have been developed since), their range was much wider than that of the corresponding machines for stone carving.

The second change revolutionizing ornament manufacture was that scientific advances improved the available materials. Improvements in metallurgy dramatically reduced the cost of cast iron in the early nineteenth century, and its use spread rapidly thereafter. New York City even went through a brief phase of making commercial buildings entirely from iron, many of which survive in SoHo. This proved to have practical problems like overheating, but adding cast iron ornament to masonry buildings became common in many places. Some cities, like Sydney and Melbourne, became especially known for their traditions of cast ironwork.

Another important material is cast stone. Cast stone is a kind of concrete, made by crushing stone, mixing the fragments (called aggregate) with a smaller quantity of cement as a binder, and then casting it in a mold. The crushed stone gives it an appearance resembling natural stone, an effect that is often augmented by mechanically tooling or etching the surface. Good cast stone is remarkably plausible: essentially no layperson would notice that it is not ‘real’, and even a specialist may struggle to tell if it is hoisted 80 feet up a facade. Simple molds are usually machine-carved in wood, and complex three-dimensional ones are themselves cast in gelatine or, today, silicone.

Although there were earlier concretes that bore some resemblance to stone, plausible cast stone seems to have emerged only in the last quarter of the nineteenth century. It became widely used in the United States in the early twentieth century, and many key public buildings in American cities made use of it. Because simple shapes had become easy to carve in stone mechanically, architects sometimes faced the bulk of the facade with natural stone and used cast stone only for the ornament.

While researching this article, I visited the factory of the cast stone manufacturer Haddonstone in Northampton. With the help of the classical architect Hugh Petter, Haddonstone has recently constructed molds based on the designs of the eighteenth-century architect James Gibbs. The molds are filled on a conveyor belt, left to dry overnight, and then opened up in minutes. So it is now possible to buy perfectly proportioned classical ornament, nearly indistinguishable from stone, that has – if the molds and the factory infrastructure are treated as a given – taken only minutes of labor to produce. This sort of capacity is only gradually reemerging, stimulated by the revival of classical architecture, but it was once widespread. Haddonstone is currently manufacturing cast stone ornament for Nansledan, the vernacular-style urban extension to Newquay supported by the King.

The third process was the enormous expansion in the available markets, and the economies of scale that this generated. In the nineteenth century the volume of construction increased tremendously, and transport networks were vastly improved.

It is well-known that railways cut travel times a great deal, perhaps by four fifths relative to stagecoaches by the late nineteenth century. But this vastly understates the transport improvements, because stagecoach speeds themselves improved dramatically during the turnpike (toll road) building boom of the previous century, as did freight via canals.

A stagecoach fare between London and Brighton, 47 miles as the crow flies, varied between 276 and 144 pence in the early nineteenth century, equating to a per-mile-traveled cost of perhaps 2 pence. By the 1880s, first-class rail travel in Britain cost an average of about 0.15 pence per mile traveled. Taking passenger fares as a rough proxy for freight rates, this suggests a fall in overland per-tonne per-mile freight costs in the order of 95 percent, for a service that had also become about five times faster. This meant that the markets available to manufacturers located anywhere with railway access grew far larger, favoring those materials like stucco and terracotta whose per-unit costs dropped a lot when they were produced at scale. In the 1930s, just as ornament was starting to decline, transport costs were vertiginously cut again, this time by the development of modern trucking.
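As a quick sanity check, the decline implied by the two stated per-mile figures can be computed directly. This sketch uses only the numbers quoted above, including the author's 2-pence estimate for stagecoach travel:

```python
# Per-mile passenger costs as given in the text.
stagecoach_pence_per_mile = 2.0   # early nineteenth-century estimate
rail_pence_per_mile = 0.15        # 1880s first-class average

decline = 1 - rail_pence_per_mile / stagecoach_pence_per_mile
print(f"{decline:.1%}")  # 92.5% -- consistent with "in the order of 95 percent"
```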

Manufacturers naturally took full advantage of this, developing an extensive system of factory production for these methods. For example, the market for architectural terracotta in the United States came to be dominated by just a few huge firms, each of which apparently commanded a near monopoly over thousands of miles. Almost the entire Pacific market was served by a firm called Gladding McBean, whose factory was in Lincoln, California; the Midwest was dominated by a Chicago firm confusingly called the Northwestern Terra Cotta Company; the East Coast was dominated by a New Jersey firm called, more intuitively, the Atlantic Terra Cotta Company. This state of affairs would have been unthinkable just decades earlier, when freight could be carried overland only by carts and pack animals.

A less important but still significant factor was the emergence of extremely large individual buildings. Most early twentieth-century skyscrapers actually had a complete set of ornament modeled for them bespoke, but the buildings were so enormous that substantial economies of scale were still achieved. This is one reason why terracotta was such a popular material for skyscrapers in interwar America, a component of American Art Deco that has now become a striking part of its visual identity.

The democratization of ornament

On the one hand, we have the increasing cost of labor; on the other, we have the fact that less labor was necessary per unit of ornament. Which effect was stronger? For the period from the start of the Industrial Revolution to the First World War, the answer should be obvious to anyone walking the streets of an old European city. The vernacular architecture of the seventeenth or eighteenth century tends to be simple, with complex ornament restricted to the homes of the rich and to public buildings. In the nineteenth-century districts, ornament proliferates: even the tenement blocks of the poor have richly decorated stucco facades.

The revealed evidence is in fact overwhelming that the net effect between, say, 1830 and 1914 was mainly one of greater affordability. To be sure, the ornament of the middle and working classes was of stucco, terracotta, or wood, not stone, and it was cast or milled in stock patterns, not bespoke. These features occasioned much censoriousness and snobbery at the time. But we might also see them as bearing witness to the democratizing power of technology, which brought within reach of the people of Europe forms of beauty that had previously belonged only to those who ruled over them.

What about the period since 1914? Did the economic tide turn against the affordability of ornament? The evidence here is more complex. Over the course of the 1920s and 1930s ornament gradually vanished from the exteriors of many kinds of architecture, though at different rates in different countries and for different types of building. In the decades since, it has seen only limited and evanescent revivals. But we still have good evidence that this change was not really driven by growing unaffordability.

The reason is that there are some relatively budget pockets of the market where ornament has remained pretty common. Virtually any like-for-like comparison of an elite building from 1900 and today will show a huge reduction in ornament. Indefinitely many comparisons are possible, but there is one on the previous page, between a British Government office from the Edwardian period and one from the early 2000s.

We could run the same sort of comparison for any two banks, corporate headquarters, parliaments, concert halls, universities, schools, art galleries, or architect-designed houses, and with occasional exceptions we would find the same pattern. But if we try to run it for mass-market housing, we get a more uncertain result. On the previous page are promotional images for mass-market British houses in the 1930s and today. What is striking is how similar they are. Both have carved brackets, molded bargeboards, faux leaded windows, paneled wooden doors, patterned hung tiles, and decorative brickwork. The modern houses have UPVC windows rather than wooden ones, and they are more likely to have garages. Otherwise, they haven’t really changed. The interiors of the modern homes mostly lack the molded cornices of the 1930s ones, but many of them still have molded skirting boards, fielded door panels, and molded door surrounds.

Browsing the website of any major British housebuilder will confirm that, although the quantity of ornament in mass-market housing probably has declined somewhat since the early 1900s, it has declined much less than that of any other build type. This pattern is even more visible in the United States. But this is exactly the opposite of what the supply-side theory would predict.

The supply-side theory says that ornament declined because it became prohibitively expensive, which suggests that it would vanish from budget housing first and gradually fade from elite building types later. In fact, budget housing is almost the only place we find it clinging on. 

The obvious explanation is that ornament survives in the mass-market housebuilder market because the people buying new-build homes at this price point are less likely to be influenced by elite fashions than are the committees that commission government buildings or corporate headquarters. The explanation, in other words, is a matter of what people demand, not of what the industry is capable of supplying: ornament survives in the housing of the less affluent because they still want it. 

An interesting special case here is the McMansion, the one really profusely ornamented type of housing that still gets built fairly often in some countries. McMansions are built for people who have achieved some level of affluence, but who stubbornly retain a non-elite love of ornamentation. They inspire passionate contempt in many sophisticated critics, to whom they afford a rare opportunity to flex cultural power without looking as though one is being nasty to poor people. McMansions illustrate how easily wealthy people and institutions could ornament their buildings if they wanted to. But, perhaps with that passionate contempt in mind, most of them no longer do.

According to the supply-side theory, the story of ornament in modernity is one of ancient crafts gradually dying out as they became economically obsolete. I have told a different story. In the nineteenth and early twentieth centuries, the production of ornament was revolutionized by technological innovation, and the quantity of labor required to produce ornament declined precipitously. Ornament became much more affordable and its use spread across society. An immense and sophisticated industry developed to manufacture, distribute, and install ornament. The great new cities of the nineteenth century were adorned with it. More ornament was produced than ever before.

We can imagine an alternative history in which demand for ornament remained constant across the twentieth century. Ornament would not have remained unchanged in these conditions. Natural stone would probably have continued to decline, although a revival might be underway as robot carving improved. Initially, natural stone would have been replaced by wood, glass, plaster, terracotta, and cast stone. As the century drew on, new materials like fiberglass and precast concrete might also have become important. Stock patterns would be ubiquitous for speculative housing and generic office buildings, but a good deal of bespoke work would still be done for high-end and public buildings. New suburban housing might not look all that different from how it looks today, but city centers would be unrecognizably altered, fantastically decorative places in which the ancient will to ornament was allied to unprecedented technical power.

This was not how it turned out. In the first half of the twentieth century, Western artistic culture was transformed by a complex family of movements that we call modernism, a trend that extends far beyond architecture into the literature of Joyce and Pound, the painting of Picasso and Matisse, and the music of Schoenberg and Stravinsky. Between the 1920s and the 1950s, modernist approaches to architecture were adopted for virtually all public buildings and many private ones. Most architectural modernists mistrusted ornament and largely excluded it from their designs. The immense and sophisticated industries that had served the architectural aspirations of the nineteenth century withered in full flower. The fascinating and mysterious story of how this happened cannot be told here. But it is a story of cultural choice, not of technological destiny. It was within our collective power to choose differently. It still is.


New Answers for Mars' Methane Mystery - Universe Today


Planetary scientists perk up whenever methane is mentioned. Methane is produced by living things on Earth, so it’s considered to be a potential biosignature elsewhere. In recent years, MSL Curiosity detected methane coming from the surface of Gale Crater on Mars. So far, nobody’s successfully explained where it’s coming from.

NASA scientists have some new ideas.

Ever since Curiosity landed on Mars in 2012, it has been detecting methane. But the methane displays some odd characteristics: it only comes out at night, it fluctuates with the seasons, and sometimes it spikes to 40 times the regular level.

The ESA’s ExoMars Trace Gas Orbiter entered a science orbit around Mars in 2018, and scientists fully expected it to detect methane in the planet’s atmosphere. But it didn’t, and methane has never been detected anywhere else on the Martian surface.

If life is producing the methane, it appears to be confined to the subsurface beneath Gale Crater.

There’s no convincing evidence that life exists on Mars. It may have existed in the past, and it’s possible that some extant life clings to a tenuous existence in subsurface brines. But absent that evidence, life is essentially ruled out as the methane source, especially since it would have to live under Gale Crater and nowhere else.

Scientists have been trying to determine the source of the methane, but so far they haven’t come up with a specific answer. Most likely it has something to do with subsurface geological processes involving water.

“It’s a story with a lot of plot twists,” said Ashwin Vasavada, Curiosity’s project scientist at NASA’s Jet Propulsion Laboratory in Southern California, which leads Curiosity’s mission.

Alexander Pavlov is a planetary scientist at NASA’s Goddard Space Flight Center who leads a group of NASA scientists studying the Martian Methane Mystery. In recent research, they suggested that the methane is stored underground. They didn’t explain what produced it, but they showed that methane can be sealed underground by salt solidified in the Martian regolith.

They suggested that the methane could be released from its subsurface reservoir by the weight of the Curiosity rover itself: the rover could break the salt seal and let methane escape in puffs. That would neatly explain why methane appears only at Gale Crater, one of only two regions where a rover is working. (The other is Jezero Crater, where the Perseverance rover operates, but it doesn’t carry a methane detector. Neither will the ESA’s Rosalind Franklin rover, scheduled to land on Mars in 2029.) The rover’s weight alone, however, doesn’t explain the seasonal and diurnal fluctuations.

The research group addressed those fluctuations by suggesting that seasonal and daily heating could also break the seal and release methane.

Their potential explanations stem from research Pavlov conducted in 2017. He cultivated halophiles, bacteria that thrive in salty conditions, in simulated Martian permafrost. The simulated soil was infused with salt, replicating conditions across much of Mars. The microbial growth results were inconclusive, but the researchers noticed something else. As the salty ice sublimated, it left behind a layer of solidified salt that formed a crust.

“We didn’t think much of it at the moment,” Pavlov said.

But he remembered it when MSL Curiosity detected an unexplained burst of methane on Mars in 2019.

“That’s when it clicked in my mind,” Pavlov said. Then, he and a team of researchers began testing conditions that could form the hardened salt seals and then break them open.

Perchlorate is a chemical salt that’s widespread on Mars. Pavlov and his fellow researchers recreated different simulated Martian permafrosts with varying amounts of perchlorate. Inside a Mars simulation chamber, they subjected the samples to different temperatures and atmospheric pressures to see if they would form seals.

In their experiments, they used neon as a methane analog and injected it under the soil. Then, they measured the gas pressure below and above the soil. They found that the pressure was higher under the soil, meaning the gas was being trapped by the salty permafrost. Furthermore, they found that seals formed in samples containing as little as 5% or 10% perchlorate, and they formed within 3 to 13 days. Those are compelling results.
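To make the trapping criterion concrete, here is a minimal Python sketch of the decision rule the experiment implies: a seal has formed when the injected gas holds more pressure below the soil layer than above it. This is purely illustrative; the class, function, threshold, and all readings below are hypothetical stand-ins, not values from Pavlov’s team.

```python
# Toy illustration of the seal-detection logic described above.
# All readings and the noise threshold are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class SampleRun:
    perchlorate_pct: float  # salt fraction of the simulated regolith
    p_below_kpa: float      # gas pressure measured under the soil layer
    p_above_kpa: float      # gas pressure measured over the soil layer

def seal_formed(run: SampleRun, noise_floor_kpa: float = 0.05) -> bool:
    """Infer a seal when the injected gas (neon standing in for methane)
    stays trapped, i.e. pressure below the crust exceeds pressure above
    it by more than the assumed measurement noise floor."""
    return (run.p_below_kpa - run.p_above_kpa) > noise_floor_kpa

# Hypothetical runs bracketing the 5-10% perchlorate range reported above.
runs = [
    SampleRun(perchlorate_pct=2.0,  p_below_kpa=0.71, p_above_kpa=0.70),
    SampleRun(perchlorate_pct=5.0,  p_below_kpa=0.95, p_above_kpa=0.70),
    SampleRun(perchlorate_pct=10.0, p_below_kpa=1.10, p_above_kpa=0.70),
]

for run in runs:
    print(f"{run.perchlorate_pct:4.1f}% perchlorate -> seal formed: {seal_formed(run)}")
```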

While 5-10% perchlorate doesn’t sound like much, it’s actually a higher concentration than in Gale Crater, where the methane has been detected. But perchlorate isn’t the only salt in Martian regolith. It also contains sulphates, another type of salt mineral. Pavlov says he and his team will test sulphates next for their ability to form a seal.

The Martian Methane Mystery is commanding a lot of attention. It’s a juicy puzzle, and once it’s solved, our understanding of methane as a biosignature, or as a false positive, will be much improved. NASA’s 2022 Planetary Mission Senior Review recommended that methane production and destruction at Mars be investigated further.

The type of work that Pavlov and his colleagues are doing is important, but it’s being held back. Pavlov says that they need more consistent methane measurements. The problem is that Curiosity’s SAM (Sample Analysis at Mars) instrument, which senses the methane, is busy with other tasks. It only checks for methane a few times per year. It’s mostly occupied with drilling samples and testing them, a critical and time-consuming part of the rover’s mission.

“Methane experiments are resource intensive, so we have to be very strategic when we decide to do them,” said Goddard’s Charles Malespin, SAM’s principal investigator.

Curiosity’s mission wasn’t designed to measure methane fluctuations. In 2017, NASA noted that the SAM instrument had sampled the atmosphere only 10 times in 20 months. Sampling that sparse leaves many questions unanswered.

Scientists think another mission is needed to advance their understanding of Martian methane. Rather than one sensor taking irregular methane readings from one location, we need multiple testing stations on the surface that regularly monitor the atmosphere. Nothing like it is in the works.

“Some of the methane work will have to be left to future surface spacecraft that are more focused on answering these specific questions,” Vasavada said.
