plant lover, cookie monster, shoe fiend
4470 stories · 15 followers

Why fancy coffee shops all have the same stools, and other dilemmas of the authenticity era.

2 Shares

Jay Owens

Shared by sarcozona and rocketo · 3 days ago

Completely Silent Computer.

1 Comment

sarcozona · 3 days ago:
This is appealing to me, but the noise coming from outside my apartment dwarfs anything in it. I wish cities didn't have cars, or only had electric cars with quiet tires. And that the water feature across the street wasn't so loud that it drowns out my alarm.

The Birth of the New American Aristocracy - The Atlantic

2 Comments

If the secrets of a nation’s soul may be read from its tax code, then our nation must be in love with the children of rich people.

sarcozona · 4 days ago:
Yep "when educated people with excellent credentials band together to advance their collective interest, it’s all part of serving the public good by ensuring a high quality of service, establishing fair working conditions, and giving merit its due. That’s why we do it through “associations,” and with the assistance of fellow professionals wearing white shoes. When working-class people do it—through unions—it’s a violation of the sacred principles of the free market. It’s thuggish and anti-modern. Imagine if workers hired consultants and “compensation committees,” consisting of their peers at other companies, to recommend how much they should be paid. The result would be—well, we know what it would be, because that’s what CEOs do."
digdoug (Louisville, KY) · 4 days ago:
*RAGE*

Teaching online and inclusion

1 Share
"Do you expect me to talk, Goldfinger?" "No Mr. Bond, I expect you to make this online course ADA compliant!"

 I’ve been teaching a completely online class this semester. I’ve done partly online classes, and practically live online anyway, so I thought this would be a fairly simple thing for me to do.

It has not. It has been a real eye-opener for thinking about student needs.

One of the biggest challenges I’ve been working with is making the class compliant with the rules for students with disabilities. The rules say that whether or not any student in the class has declared a disability, you must make every item in the class as readily available and accessible as if there were students with disabilities.

This means video lectures need closed captioning. There is voice recognition software that does closed captioning automatically, which is great, but it never does it perfectly. Every time I say, “Doctor Zen,” the software puts in, “doctors in.” This means you have to go in, listen to the entire lecture, and proofread the captioning for the entire lecture.

Similarly, every image needs a description so that someone who is blind or otherwise visually impaired can understand the material. And many scientific diagrams are complex and challenging. Today, I was faced with trying to write a complete description of this:

[Concept map: the human genome, how it is organized, and how it relates to traits; described in full below]

Here’s what I came up with for the concept map above:

Human genome influences traits. Human genome has 2 copies in every cell. 1 copy is made of 3 billion base pairs. Cell makes up tissue. In cell, genome divided into nuclear genome and mitochondrial genome. Cells manifest traits. Tissues make up organs. Tissues manifest traits. Organs make up body. Body manifests traits. Traits leads back to Lesson 1. Mitochondrial genome has 1 circular chromosome. Mitochondrial genome is many per cell. Circular chromosome is many per cell. Circular chromosome made of nucleic acid and histone proteins. Nuclear genome is one per cell. Nuclear genome is 23 pairs of linear chromosomes. 23 pairs of linear chromosomes has 22 pairs called autosomes. 23 pairs of linear chromosomes has 1 pair called sex chromosomes. Sex chromosomes are XX for female. Sex chromosomes are XY for male. 23 pairs of linear chromosomes are made of nucleic acid and histone proteins. Nucleic acid wraps around histone proteins. Nucleic acid has two types, DNA and RNA. RNA leads to Lesson 3. DNA is composed of deoxynucleotides. DNA is double stranded. Double stranded leads to helical shape. Double stranded by base pairs. Deoxynucleotides are 4 types of nitrogenous bases. Nitrogenous bases can form base pairs. Nitrogenous base connects to A, T, C, G. A base pairs with T and vice versa. G base pairs with C and vice versa.

Writing that description... took time.

Anyone who thinks that online teaching is going to be some sort of big time saver that will allow instructors to reach a lot more students has never prepared an online class. It’s long. It’s hard. It’s often bordering on torturous (hence the “No Mr. Bond” gag at the top of the post).

These things take time, but I don’t begrudge the time spent. It’s the right thing to do. It’s forced me to think more deeply about how I can provide more resources that are more helpful to more students. It’s not just deaf students who can benefit from closed captions, for instance. Someone who can hear could benefit from seeing words spelled out, or maybe use them when they are listening in a noisy environment, or one where sound would be distracting.

And I keep thinking that if it takes me a lot of work to put these in, it’s nothing compared to what students who need these materials face navigating through courses every day.

External links

Flowcharts and concept maps
Shared by sarcozona · 4 days ago

What does "mixed-income" mean for our towns?

1 Comment

What does mixed-income housing mean? To some, it means the next generation of public housing, an incredible improvement over the disastrous projects of the early 20th century. To others, it is just a less obvious way to discriminate against people with low incomes. The complicated reputation of this type of housing has fueled debate between policymakers and residents for decades. Needless to say, the effectiveness of mixed-income housing as we know it is far from a settled issue. But through its successes and failures, mixed-income housing offers some valuable ideas for ensuring equitable and sustainable neighborhoods. And it may offer a glimpse at a promising future for our towns.

A public housing project in Rochester, NY

The Beginning of Public Housing

To understand the relevance of mixed-income housing to our cities today, it’s important to know where it came from. In the early 20th century, U.S. housing policy introduced the government to the real estate business. Early iterations of public housing were meant as a solution to the problems of tenements and slums that housed most low-income residents. Slumlords had supposedly taken advantage of the poorest class of people by providing cramped, unsanitary, and unsightly living conditions in privately-operated tenements. The federal government intervened by expanding the housing division of the Public Works Administration and passing subsequent housing acts which marked the provision of housing as a responsibility of the government.

However, within a few decades, public housing facilities were plagued by the same mismanagement, unsanitary living conditions, and lack of maintenance that were used to justify the government’s initial venture into housing (See: Pruitt-Igoe, Robert Taylor Homes). The dramatic failure of federally-funded housing projects famously drew the ire of urbanist Jane Jacobs, who described the housing as “[l]ow-income projects that become worse centers of delinquency, vandalism, and general social hopelessness than the slums they were supposed to replace.” The key difference was that this time, the disastrous housing came at the expense of the taxpayer.

The Arrival of Mixed-Income Housing

Towards the end of the 20th century, mixed-income housing emerged as an alternative to the projects and has gained traction in housing policy ever since. Mixed-income facilities allow the government to exit the business of development and property management altogether. Under a mixed-income system, public housing agencies grant density bonuses and financing incentives to private developers in exchange for including units with rents held below the market rate to people with qualifying incomes. The practice is a favorite in New Jersey where municipalities have a constitutional obligation to permit the construction of units affordable to a range of incomes under the Mount Laurel Doctrine.

A mixed-income development in Morristown, New Jersey

One of the primary benefits of the integration of subsidized and market-rate housing is that low-income residents typically enjoy higher-quality construction and amenities than what is offered in traditional public housing. Newer developments like Essex Crossing in New York, or The Union on Queen near Washington D.C. offer a quality of subsidized housing that would have been unimaginable just a few short decades ago when public housing lacked so much as air conditioning or functional heating systems.

Beyond the improved facilities, mixed-income developments give lower-income residents the opportunity to live in higher-income neighborhoods. When the government was constructing entire facilities dedicated to low-income residents, political and financial realities typically ensured the projects were located far from the wealthiest and safest neighborhoods. Mixed-income facilities help soften the negative perceptions of subsidized housing by removing obvious geographic and visual indicators of class. In doing so, they become more politically palatable to wealthier neighborhoods — with good schools and lower crime rates — that may otherwise oppose subsidized housing.

Emerging Limitations

Mixed-income housing is not without flaws. For one, the physical integration of subsidized and market-rate units has not erased the differences in quality of life between economic classes. Low-income residents may not always have access to the same amenities (e.g. fitness centers, community rooms, etc.) as the market-rate renters. Some facilities place additional restrictions on residents of the subsidized units only, such as frequent room inspections and regulated visitor access. What low-income renters gain in quality of living arrangements, they often sacrifice in privacy and autonomy.

Offering a dynamic mix of apartments, townhouses, and single-family homes, this Houston neighborhood has housing options at a wide variety of price points.

Most importantly, mixed-income housing is flawed because the system falls short of addressing the root of the problem it is intended to solve: lack of affordable housing. It’s true that in places like New Jersey, laws that mandate subsidized housing have prompted the construction of hundreds of thousands of housing units, including subsidized and market-rate, that would have otherwise been blocked at the municipal level.

But forcing the price of some units below market rates simply drives up the cost of housing for non-qualifying residents, which naturally expands the pool of people seeking the subsidies in the first place. That isn’t to say the government shouldn’t provide any sort of safety net for those in need of housing. But taken to the extreme, achieving mixed-income neighborhoods through subsidization is an inherently unsustainable practice; it is an intermediate step towards affordability rather than a goal in and of itself.

A Different Type of Mixed-Income Neighborhood

Until this point, we have discussed mixed-income housing as a combination of market-rate and subsidized units. But that is not the only way to achieve mixed-income neighborhoods. With the right land use regulations, neighborhoods can naturally offer housing at a variety of prices within reach of most incomes without any need for subsidization.

Neighborhoods that offer housing to a range of incomes provide a variety of benefits over subsidized mixed-income neighborhoods. Whereas subsidized mixed-income units require wealth redistribution for viability, naturally affordable units expand the tax base as more people can support themselves without public aid. Naturally mixed-income neighborhoods also avoid the need for paternalistic policies that target low-income residents, such as bans on porch grills, visitor restrictions, or limited access to facility amenities. When housing is affordable to all, individuals retain their autonomy and dignity as residents.

Fortunately, we can find examples of non-subsidized mixed-income neighborhoods in pockets all over the country. Sometimes natural mixed-income housing occurs due to a lack of pressure on the housing market. For instance, it’s not uncommon to find areas catering to a wide variety of incomes in older neighborhoods of post-industrial cities like Trenton, Cleveland, Pittsburgh, or Buffalo.

In this neighborhood a mere 1.5 miles from the economic opportunities of downtown Houston, a modest one-bedroom apartment goes for $799 just one block from a luxury three-bedroom apartment going for $3,800. In a place like northern New Jersey, those prices would only be possible with considerable subsidies.

In cities with explosive economic growth, such neighborhoods are harder to come by as zoning regulations often cripple the ability of housing supply to keep up with demand. However, Houston offers a good example of how a city bursting with economic opportunity can maintain neighborhoods accessible to a wide range of incomes.

While certainly not without regulation, Houston is far more developer-friendly than most other cities in the country, allowing developers to employ a wide range of housing types and densities to compete for residents. Residents benefit from this competition as rents remain relatively affordable and stable across incomes despite the massive influx of jobs and residents.

After decades of policy favoring economic polarization, mixed-income housing is a step in the right direction for building economically inclusive and sustainable neighborhoods. However, achieving mixed-income through subsidies is not enough to address the ongoing issue of housing affordability.

Subsidized housing policy must be matched with sensible land use regulations that allow neighborhoods to be responsive to the needs of people of all incomes. Only then will people of all incomes have equal access to the wide range of economic opportunities offered by our towns and cities.

(Top photo by Johnny Sanphillippo)



About the Author

Austin Maitland is a Rochester, NY native and graduate student in urban planning at Rutgers University.



sarcozona · 4 days ago:
It's not fair to mention Houston without describing how costs have been shifted to higher transportation costs and unrestricted maintenance costs as the city sprawls, plus the absolutely criminal environmental and industrial hazards people are subjected to because of the lax zoning.

You better check yo self before you wreck yo self

1 Share

We (Sean Talts, Michael Betancourt, Me, Aki, and Andrew) just uploaded a paper (code available here) that outlines a framework for verifying that an algorithm for computing a posterior distribution has been implemented correctly. It is easy to use, straightforward to implement, and ready to be used as part of a Bayesian workflow.

This type of testing should be required in order to publish a new (or improved) algorithm that claims to compute a posterior distribution. It’s time to get serious about only publishing things that actually work!

You Oughta Know

Before I go into our method, let’s have a brief review of some things that are not sufficient to demonstrate that an algorithm for computing a posterior distribution actually works.

  • Theoretical results that are anything less than demonstrably tight upper and lower bounds* that work in finite-sample situations.
  • Comparison with a long run from another algorithm unless that algorithm has stronger guarantees than “we ran it for a long time”. (Even when the long-running algorithm is guaranteed to work, there is nothing generalizable here. This can only ever show the algorithm works on a specific data set.)
  • Recovery of parameters from simulated data (this literally checks nothing)
  • Running the algorithm on real data. (Again, this checks literally nothing.)
  • Running the algorithm and plotting traceplots, autocorrelation, etc etc etc
  • Computing the Gelman-Rubin R-hat statistic. (Even using multiple chains initialized at diverse points, this only checks if the Markov chain has converged. It does not check that it has converged to the correct thing.)

I could go on and on and on.

The method that we are proposing does actually do a pretty good job at checking if an approximate posterior is similar to the correct one. It isn’t magic. It can’t guarantee that a method will work for any data set.

What it can do is make sure that for a given model specification, one dimensional posterior quantities of interest will be correct on average. Here, “on average” means that we average over data simulated from the model. This means that rather than just check the algorithm once when it’s proposed, we need to check the algorithm every time it’s used for a new type of problem. This places algorithm checking within the context of Bayesian Workflow.

This isn’t as weird as it seems. One of the things that we always need to check is that we are actually running the correct model. Programming errors happen to everyone and this procedure will help catch them.

Moreover, if you’re doing something sufficiently difficult, it can happen that even something as stable as Stan will quietly fail to get the correct result. The Stan developers have put a lot of work into trying to avoid these quiet cases of failure (Betancourt’s idea to monitor divergences really helped here!), but there is no way to user-proof software. The Simulation-Based Calibration procedure that we outline in the paper (and below) is another safety check that we can use to help us be confident that our inference is actually working as expected.

(* I will also take asymptotic bounds and sensitive finite sample heuristics because I’m not that greedy. But if I can’t run my problem, check the heuristic, and then be confident that if someone died because of my inference, it would have nothing to do with the computation of the posterior, then it’s not enough.)

Don’t call it a comeback, I’ve been here for years

One of the weird things that I have noticed over the years is that it’s often necessary to re-visit good papers from the past so that they reflect our new understanding of how statistics works. In this case, we re-visited an excellent idea that Samantha Cook, Andrew, and Don Rubin proposed in 2006.

Cook, Gelman, and Rubin proposed a method for assessing output from software for computing posterior distributions by noting a simple fact:

If \theta^* \sim p(\theta) and y^* \sim p(y \mid \theta^*), then the posterior quantile \Pr(h(\theta^*)<h(\theta)\mid y^*) is uniformly distributed  (the randomness is in y^*) for any continuous function h(\cdot).

There’s a slight problem with this result.  It’s not actually applicable for sample-based inference! It only holds if, at every point, all the distributions are continuous and all of the quantiles are computed exactly.

In particular, if you compute the quantile \Pr(h(\theta^*)<h(\theta)\mid y^*) using a bag of samples drawn from an MCMC algorithm, this result will not hold.

This makes it hard to use the original method in practice. That might be understating the problem. This whole project happened because we wanted to run Cook, Gelman and Rubin’s procedure to compare some Stan and BUGS models. And we just kept running into problems. Even when we ran it on models that we knew worked, we were getting bad results.

So we (Sean, Michael, Aki, Andrew, and I) went through and tried to re-imagine their method as something that is more broadly applicable.

When in doubt, rank something

The key difference between our paper and Cook, Gelman, and Rubin is that we have avoided their mathematical pitfalls by re-casting their main theoretical result to something a bit more robust. In particular, we base our method around the following result.

Let \theta^* \sim p(\theta) and y^* \sim p(y \mid \theta^*), and \theta_1,\ldots,\theta_L be independent draws from the posterior distribution p(\theta\mid y^*). Then the rank of h(\theta^*) in the bag of samples h(\theta_1),\ldots,h(\theta_L) has a discrete uniform distribution on [0,L].

This result is true for both discrete and continuous distributions, and it gives us the freedom to choose L. As a rule, the larger L is, the more sensitive this procedure will be. On the other hand, a larger L will require more simulated data sets in order to be able to assess whether the observed ranks deviate from a discrete-uniform distribution. In the paper, we chose L=100 samples for each posterior.
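
To make the result concrete, here is a minimal sketch (my own toy example, not from the paper) that checks the discrete-uniform rank property on a conjugate Normal-Normal model, where exact independent posterior draws are available:

import numpy as np

rng = np.random.default_rng(1)

# Toy conjugate model: theta ~ N(0, 1) and y_i | theta ~ N(theta, 1), i = 1..n_obs.
n_obs, L, n_sims = 10, 100, 2000
ranks = np.empty(n_sims, dtype=int)

for s in range(n_sims):
    theta_star = rng.normal(0.0, 1.0)              # "true" parameter drawn from the prior
    y = rng.normal(theta_star, 1.0, size=n_obs)    # data simulated from that parameter
    post_var = 1.0 / (1.0 + n_obs)                 # exact posterior by conjugacy
    post_mean = post_var * y.sum()
    draws = rng.normal(post_mean, np.sqrt(post_var), size=L)
    ranks[s] = np.sum(draws < theta_star)          # rank of theta_star: an integer in {0, ..., L}

# With a correctly computed posterior, the ranks are discrete-uniform on {0, ..., L}.
counts = np.bincount(ranks, minlength=L + 1)
print("expected count per rank:", n_sims / (L + 1))
print("observed min/max counts:", counts.min(), counts.max())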

The hills have eyes

But, more importantly, the hills have autocorrelation. If a posterior has been computed using an MCMC method, the bag of samples that are produced will likely have non-trivial autocorrelation. This autocorrelation will cause the rank histogram to deviate from uniformity in a specific way. In particular, it will lead to spikes in the histogram at zero and/or one.

To avoid this, we recommend thinning the sample to remove most of the autocorrelation.  In our experiments, we found that thinning by effective sample size was sufficient to remove the artifacts, even though this is not theoretically guaranteed to remove the autocorrelation.  We also considered using some more theoretically motivated methods, such as thinning based on Geyer’s initial positive sequences, but we found that these thinning rules were too conservative and this more aggressive thinning did not lead to better rank histograms than the simple effective sample size-based thinning.
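
In case it is useful, here is one minimal way to do that kind of thinning, assuming a plain one-dimensional chain and a crude autocorrelation-based effective sample size estimate that truncates the autocorrelation sum at the first non-positive lag. It is a sketch under those assumptions, not the exact rule used in the paper:

import numpy as np

def effective_sample_size(chain):
    # Crude ESS estimate: n / (1 + 2 * sum of positive-lag autocorrelations),
    # truncating the sum at the first non-positive autocorrelation.
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    n = x.size
    acov = np.correlate(x, x, mode="full")[n - 1:] / n   # autocovariance at lags 0..n-1
    rho = acov / acov[0]                                 # autocorrelation
    tau = 1.0
    for t in range(1, n):
        if rho[t] <= 0.0:
            break
        tau += 2.0 * rho[t]
    return n / tau

def thin_chain(chain, L):
    # Keep L approximately equally spaced draws from the chain.
    chain = np.asarray(chain)
    idx = np.linspace(0, chain.size - 1, num=L).round().astype(int)
    return chain[idx]

# Usage: keep no more draws than the chain's effective sample size, e.g.
#   chain = ...  # MCMC draws of one quantity for one simulated data set
#   L = min(100, int(effective_sample_size(chain)))
#   draws = thin_chain(chain, L)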

Simulation based calibration

After putting all of this together, we get the simulation based calibration (SBC) algorithm. The version described here is for correlated samples. (There is a version in the paper for independent samples.)

The simple idea is that for each of the N simulated data sets, you generate a bag of L approximately independent samples from the approximate posterior. (You can do this however you want!) You then compute the rank of the true parameter (the one that was used to simulate that data set) within the bag of samples. So you need to compute N true parameters, each of which is used to simulate one data set, which in turn is used to compute L samples from its posterior.
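
Put together, a minimal version of that loop might look like the sketch below. This is my own toy illustration, not the algorithm listing from the paper: draw_posterior stands in for whatever actually produces the bag of approximately independent posterior draws (thinned MCMC, ADVI, INLA, and so on), and here it simply returns exact draws for the same conjugate Normal-Normal model as above.

import numpy as np

rng = np.random.default_rng(2)
N, L, n_obs = 1000, 100, 10   # N simulated data sets, L posterior draws each

def draw_posterior(y, L):
    # Stand-in for your inference code: return L approximately independent
    # draws from p(theta | y). Here, the exact conjugate posterior of the
    # toy model theta ~ N(0, 1), y_i | theta ~ N(theta, 1).
    post_var = 1.0 / (1.0 + y.size)
    post_mean = post_var * y.sum()
    return rng.normal(post_mean, np.sqrt(post_var), size=L)

ranks = np.empty(N, dtype=int)
for s in range(N):
    theta_star = rng.normal(0.0, 1.0)              # 1. true parameter from the prior
    y = rng.normal(theta_star, 1.0, size=n_obs)    # 2. one simulated data set
    draws = draw_posterior(y, L)                   # 3. bag of L posterior draws
    ranks[s] = np.sum(draws < theta_star)          # 4. rank of the true value

# If draw_posterior is working correctly, ranks is discrete-uniform on {0, ..., L};
# plot a histogram of the ranks and look for the characteristic shapes discussed below.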

So. Validating code with SBC is obviously expensive. It requires a whole load of runs to make it work. The up side is that all of this runs in parallel on a cluster, so if your code is reliable, it is actually quite straightforward to run.

The place where we ran into some problems was trying to validate MCMC code that we knew didn’t work. In this case, the autocorrelation on the chain was so strong that it wasn’t reasonable to thin the chain to get 100 samples. This is an important point: if your method fails some basic checks, then it’s going to fail SBC. There’s no point wasting your time.

The main benefit of SBC is in validating MCMC methods that appear to work, or validating fast approximate algorithms like INLA (which works) or ADVI (which is a more mixed bag).

This method also has another interesting application: evaluating approximate models. For example, if you replace an intractable likelihood with a cheap approximation (such as a composite likelihood or a pseudolikelihood), SBC can give some idea of the errors that this approximation has pushed into the posterior. The procedure remains the same: simulate parameters from the prior, simulate data from the correct model, and then generate a bag of approximately uncorrelated samples from the corresponding posterior using the approximate model. While this procedure cannot assess the quality of the approximation in the presence of model error, it will still be quite informative.

When You’re Smiling (The Whole World Smiles With You)

One of the most useful parts of the SBC procedure is that it is inherently visual. This makes it fairly straightforward to work out how your algorithm is wrong. The one-dimensional rank histograms have four characteristic non-uniform shapes: “smiley”, “frowny”, “a step to the left”, “a jump to the right”, which are all interpretable (a toy illustration follows the list below).

  • Histogram has a smile: The posteriors are narrower than they should be. (We see too many low and high ranks)
  • Histogram has a frown: The posteriors are wider than they should be. (We don’t see enough low and high ranks)
  • Histogram slopes from left to right: The posteriors are biased upwards. (The true value is too often in the lower ranks of the sample)
  • Histogram slopes from right to left: The posteriors are biased downwards. (The opposite)
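
As a toy illustration of the first and third shapes (again my own conjugate Normal-Normal example, not something from the paper), deliberately mis-computing the posterior produces exactly these patterns:

import numpy as np

rng = np.random.default_rng(3)
N, L, n_obs = 2000, 100, 10

def sbc_ranks(perturb):
    # SBC loop for the toy Normal-Normal model, with the exact posterior
    # deliberately distorted by perturb(mean, sd) -> (mean, sd).
    ranks = np.empty(N, dtype=int)
    for s in range(N):
        theta_star = rng.normal(0.0, 1.0)
        y = rng.normal(theta_star, 1.0, size=n_obs)
        post_var = 1.0 / (1.0 + n_obs)
        mean, sd = perturb(post_var * y.sum(), np.sqrt(post_var))
        draws = rng.normal(mean, sd, size=L)
        ranks[s] = np.sum(draws < theta_star)
    return ranks

too_narrow = sbc_ranks(lambda m, s: (m, 0.5 * s))   # posterior too narrow: expect a "smile"
biased_up = sbc_ranks(lambda m, s: (m + 0.3, s))    # posterior biased upward: low ranks pile up

for name, r in (("too narrow", too_narrow), ("biased up", biased_up)):
    counts, _ = np.histogram(r, bins=5, range=(0, L))
    print(name, counts, "uniform expectation:", N // 5)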

These histograms seem to be sensitive enough to indicate when an algorithm doesn’t work. In particular, we’ve observed that when the algorithm fails, these histograms are typically quite far from uniform. A key thing that we’ve had to assume, however, is that the bag of samples drawn from the computed posterior is approximately independent. Autocorrelation can cause spurious spikes at zero and/or one.

These interpretations are inspired by the literature on calibrating probabilistic forecasts. (Follow that link for a really detailed review and a lot of references).  There are also some multivariate extensions to these ideas, although we have not examined them here.

The post You better check yo self before you wreck yo self appeared first on Statistical Modeling, Causal Inference, and Social Science.

Shared by sarcozona · 4 days ago