New Embroidered Clothes and Portraits by Lisa Smirnova

Moscow-based embroidery artist Lisa Smirnova (previously here and here) continues to stitch beautifully rich illustrations of people, capturing the subtle details of eyes, hair, and shadows, thread by thread. Smirnova brings an almost painterly quality to her embroideries that are each infused with bright splashes of color and occasional patchworks of materials. Collected here are a number of pieces from the last year or so, but you can find additional recent projects on Behance.


Ronin Featured in Nature Article on Independent Scholarship

Out today is a nice article in the Careers section of Nature called “Flexible Working: Solo Scientist.” It features the Ronin Institute prominently, and includes quotes from an interview with me, as well as with Research Scholars Jeff Rose, Gene Bunin, and Vicenta Salvador. Also prominently featured is one of Gordon Webster’s excellent photographs from November’s unconference. Enjoy!


Water Women: Swimming Figures Dip in and out of Water by Sonia Alins

In this collection of illustrations titled Dones d’aigua (water women), Spanish artist and illustrator Sonia Alins depicts several women immersed almost completely underwater, just a head or foot poking out from the uncertain depths of cloudy liquid. A haunting tension emerges not only from the clever split view created by utilizing translucent paper to mimic water, but also from the slightly ambiguous situation of the figures. It’s not always immediately clear if the women are swimming or drowning. You can follow more of Alins’ work on Instagram or Behance, and a few of her pieces are available as prints. (via Supersonic)


Convergent adaptation to dangerous prey proceeds through the same first-step mutation in the garter snake Thamnophis sirtalis

Abstract

Convergent phenotypes often result from similar underlying genetics, but recent work suggests convergence may also occur in the historical order of substitutions en route to an adaptive outcome. We characterized convergence in the mutational steps to two independent outcomes of tetrodotoxin (TTX) resistance in separate geographic lineages of the common garter snake (Thamnophis sirtalis) that coevolved with toxic newts. Resistance is largely conferred by amino acid changes in the skeletal muscle sodium channel (NaV1.4) that interfere with TTX-binding. We sampled variation in NaV1.4 throughout western North America and found clear evidence that TTX-resistant changes in both lineages began with the same isoleucine-valine mutation (I1561V) within the outer pore of NaV1.4. Other point mutations in the pore, shown to confer much greater resistance, accumulate later in the evolutionary progression and always occur together with the initial I1561V change. A gene tree of NaV1.4 suggests the I1561V mutations in each lineage are not identical-by-descent, but rather they arose independently. Convergence in the evolution of channel resistance is likely the result of shared biases in the two lineages of Th. sirtalis – only a few mutational routes can confer TTX resistance while maintaining the conserved function of voltage-gated sodium channels.


Why Is Cancer More Common in Men?

Erin O'Donnell in Harvard Magazine:

Oncologists know that men are more prone to cancer than women; one in two men will develop some form of the disease in a lifetime, compared with one in three women. But until recently, scientists have been unable to pinpoint why. In the past, they theorized that men were more likely than women to encounter carcinogens through factors such as cigarette smoking and factory work. Yet the ratio of men with cancer to women with cancer remained largely unchanged across time, even as women began to smoke and enter the workforce in greater numbers. Pediatric cancer specialists also noted a similar “male bias to cancer” among babies and very young children with leukemia. “It’s not simply exposures over a lifetime,” explains Andrew Lane, assistant professor of medicine and a researcher at the Dana-Farber Cancer Institute. “It’s something intrinsic in the male and female system.”

Now, discoveries by Lane and the Broad Institute of Harvard and MIT reveal that genetic differences between males and females may account for some of the imbalance. A physician-researcher who studies the genetics of leukemia and potential treatments, Lane says that he and others noted that men with certain types of leukemia often possess mutations on genes located on the X chromosome. These mutations damage tumor-suppressor genes, which normally halt the rampant cell division that triggers cancer.

Lane initially reasoned that females, who have two X chromosomes, would be less prone to these cancers because they have two copies of each tumor suppressor gene. In contrast, men have an X and a Y chromosome—or just one copy of the protective genes, which could be “taken out” by mutation. But the problem with that hypothesis, Lane says, was a “fascinating phenomenon from basic undergraduate biology called X-inactivation.” In a female embryo, he explains, cells randomly inactivate one of the two X chromosomes. “When a female cell divides, it remembers which X chromosome is shut down, and it keeps it shut down for all of its progeny.” If female cells have only one X chromosome working at a time, then they should be just as likely as male cells to experience cancer-causing gene mutations. So Lane and his team dug deeper into existing studies and encountered a little-known and surprising finding: “There are about 800 genes on the X chromosome,” he says, “and for reasons that are still unclear, about 50 genes on that inactive X chromosome stay on.” In a “big Aha! moment,” Lane’s group realized that those gene mutations common in men with leukemia were located on genes that continue to function on women’s inactive chromosome. The researchers dubbed those genes EXITS for “Escape from X-Inactivation Tumor Suppressors.” Women, Lane explains, thus have some relative protection against cells becoming cancerous because they, unlike men, do have two copies of these tumor-suppressor genes functioning at all times.

More here.


How far can the logic of shrinkage estimators be pushed? (Or, when should you compare apples and oranges?)

Scientists—and indeed scholars in any field—often have to choose how wide a net to cast when attempting to define a concept, estimate some quantity of interest, or evaluate some hypothesis. Is it useful to define “ecosystem engineering” broadly so as to include any and all effects of living organisms on their physical environments, or does that amount to comparing apples and oranges?* Should your meta-analysis of [ecological topic] include or exclude studies of human-impacted sites? Can microcosms and mesocosms be compared to natural systems (e.g., Smith et al. 2005), or are they too artificial? As a non-ecological example that I, and probably many of you, worry about these days, are there any good historical precedents for Donald Trump outside the US or in US history, or is he sui generis? In all these cases and others, there’s no clear-cut, obvious division between relevant information and irrelevant information, things that should be lumped together and things that shouldn’t be. Rather, there’s a fuzzy line, or a continuum. What do you do about that? Are there any general rules of thumb?

I have some scattered thoughts on this, inspired by the concept of “shrinkage” estimates in statistics:

  • If you’re not familiar with the concept of “shrinkage” or the intuitions behind it, I highly recommend reading Efron (1977; link fixed), which is one of the best explainers I’ve ever read on anything. You really should click through and then come back, but if you insist on reading on, here’s a summary: Imagine that you’re estimating a bunch of independent or nearly-independent means. Efron uses the example of estimating the true batting averages of each of a bunch of baseball players, based on their observed batting averages in a small sample of games. But you could also think of, say, estimating the true expression levels of a bunch of different genes. Your best estimates, in the sense of lowest total squared error, are provided not by the sample means themselves, but by “shrinking” all the sample means towards the grand mean. The optimal amount of shrinkage depends on (i) the precision with which the sample means are estimated (less precise estimates get shrunk towards the grand mean more), (ii) how close the sample means are to one another (you shrink them less if they’re more spread out around the grand mean), and (iii) how many means you’re estimating (more means = more shrinkage). The intuition is that, if you have a bunch of sample means, just by chance some of them will happen to be overestimates of the corresponding true means, others will happen to be underestimates. The more means you have, the greater the chances some of them will be extreme over- or under-estimates. So you can reduce your overall error by biasing all of your estimated means towards the grand mean. That slight increase in bias buys you a big reduction in variance. Another way to put it is to say that each of the means gives you some information about what the true values of the other means are. You’re throwing that information away if you just use the unshrunken sample means as your estimates of the true means. You’re casting too narrow an evidentiary net. This is the intuition behind empirical Bayes methods, and it’s closely related to the intuition behind Bayesian methods involving prior information. (For a concrete toy version of this estimator, see the first code sketch after this list.)
  • A similar intuition arises in the context of regression, something I also learned from a Brad Efron paper. You can think of regression as using “indirect” as well as “direct” information to estimate the mean of the dependent variable Y conditional on the value of the predictor variable X. Your best estimate of the conditional mean of Y for any given value of X isn’t the mean of the observations of Y at that value of X (the “direct” information provided by the data). Indeed, you might not even have any observations of Y at that value of X! Rather, your best estimate of the conditional mean of Y for any given value of X depends on the estimated parameters describing the regression of the mean of Y on X (estimated slope and intercept for linear regression). And those estimated parameters of course depend on all of the data, not just the observations at the value of X in question. In other words, observations of Y for any given value of X give you indirect information about the true mean of Y at other values of X.
  • A similar intuition underpins hierarchical modeling, the statistical estimation of different, hierarchically-nested sources of variation. For instance, here’s a nice intuitive example from baseball, a sport in which players’ unknown true batting averages vary among players who play the same position, and among positions (e.g., pitchers tend to be terrible hitters, whereas first basemen tend to be excellent hitters). You can minimize your total error by shrinking your estimates of each player’s hitting skill towards the mean for their position, and shrinking your estimated means for each position towards the grand mean. To do otherwise is to discard relevant information. That may seem puzzling: why should the batting average of any one player affect your estimate of the average of some other player, especially one who plays a different position? The answer is that every player’s batting average provides some information about the overall “batting environment” all players experience. They all play under the same rules, they all face the same pitchers, and so on. (The second sketch after this list illustrates this two-level shrinkage.)
  • However, it’s not the case that estimating more means always provides more information about all of them. That’s for at least a couple of reasons. First, estimating more means might well require estimating each one with less precision, for instance because total sampling effort is finite. Second, and less obviously, reducing the total error associated with estimating a bunch of means is not the same as reducing the error with which any particular mean is estimated. As Efron (1977) shows, if you’re estimating a bunch of means, and you expand your dataset to include one or more genuinely atypical means (for a certain precise technical sense of “atypical”), then you can actually increase the error with which you estimate both the atypical means and the typical ones.
  • One can address the issue of “atypicality” by estimating how heterogeneous one’s collection of means is. Meta-analysts often do this. It’s how they address concerns about what studies to include or exclude from the meta-analysis. Correct me if I’m wrong, but I believe the usual advice for meta-analysts is to err on the side of including a wider range of studies rather than a narrower range, and then estimate whether the means of those studies are in fact heterogeneous (i.e., do they differ more than would be expected from sampling error alone?). So for instance, if you’re not sure whether studies from human-impacted ecosystems should be included in your ecological meta-analysis, your best bet is to include them and then test whether they’re really different from studies of non-impacted systems. That ensures you’re not unwittingly throwing away information. (The third sketch after this list shows the standard heterogeneity test.)
  • What I’m struggling with is how far these intuitions can be pushed. Can they be pushed beyond their original, strictly-statistical context? Do the intuitions in the previous bullets provide a general argument that one should always cast a wide evidentiary net?
  • For instance, do these intuitions provide a knock-down argument for the relevance of microcosms and mesocosms to ecological research? Even if you only care about estimating what’s going on in nature, is there a strong prima facie case that you’re Doing It Wrong if you ignore information from microcosms and mesocosms on the grounds that they’re “unrealistic” or “different”? As long as microcosms have something in common with nature, isn’t that a prima facie argument for their relevance?
  • I don’t think these intuitions can be pushed completely beyond their original statistical context without losing all force. For instance, I don’t think they tell you much about whether a broad concept like “ecosystem engineering” is scientifically useful or not. I think that depends on much more than the statistical precision with which one can estimate the effects of ecosystem engineers in a meta-analytic context.
  • All this is connected to my old post on when one should focus on the average vs. on the variation around the average.
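
To make the shrinkage intuition concrete, here is a minimal numerical sketch of a James-Stein-style estimator in Python, in the spirit of Efron’s batting-average example. Everything in it (the number of players, the at-bat count, the simulated true averages) is invented for illustration; it is not Efron’s actual data or code.

```python
import numpy as np

rng = np.random.default_rng(42)

n_players = 18        # same count as Efron's example, but the data here are simulated
at_bats = 45          # assumed number of at-bats per player
true_means = rng.uniform(0.2, 0.35, n_players)  # invented "true" averages

# Observed averages: hits in `at_bats` binomial trials, divided by at_bats
observed = rng.binomial(at_bats, true_means) / at_bats

# James-Stein-style shrinkage toward the grand mean. The factor shrinks
# more when sampling noise is large relative to the spread of the means.
grand_mean = observed.mean()
sampling_var = grand_mean * (1 - grand_mean) / at_bats  # binomial approximation
spread = ((observed - grand_mean) ** 2).sum()
shrink = max(0.0, 1 - (n_players - 3) * sampling_var / spread)
shrunk = grand_mean + shrink * (observed - grand_mean)

def total_sq_error(est):
    return ((est - true_means) ** 2).sum()

print(f"raw sample means, total squared error: {total_sq_error(observed):.4f}")
print(f"shrunken means,   total squared error: {total_sq_error(shrunk):.4f}")
```

On most simulated draws the shrunken estimates have lower total squared error than the raw sample means, which is the whole point: a little bias toward the grand mean buys a large reduction in variance.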
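
The two-level version works the same way, applied twice. Below is a sketch with invented batting averages and assumed variance components; the precision-style weights are the usual empirical-Bayes heuristic, not output from a fitted hierarchical model.

```python
import numpy as np

# Hypothetical observed batting averages, grouped by position
observed = {
    "pitcher":    np.array([0.12, 0.15, 0.18, 0.10]),
    "catcher":    np.array([0.22, 0.26, 0.24, 0.25]),
    "first_base": np.array([0.31, 0.27, 0.30, 0.33]),
}

# Assumed variance components (made up for illustration)
sampling_var    = 0.0040  # noise in one player's observed average
var_within_pos  = 0.0015  # spread of true averages within a position
var_between_pos = 0.0060  # spread of true position means

pos_means = {p: x.mean() for p, x in observed.items()}
grand_mean = float(np.mean(list(pos_means.values())))

# Level 2: shrink each position mean toward the grand mean
w_pos = var_between_pos / (var_between_pos + sampling_var / 4)  # 4 players/position
shrunk_pos = {p: grand_mean + w_pos * (m - grand_mean) for p, m in pos_means.items()}

# Level 1: shrink each player toward their (already shrunken) position mean
w_player = var_within_pos / (var_within_pos + sampling_var)
for pos, x in observed.items():
    shrunk = shrunk_pos[pos] + w_player * (x - shrunk_pos[pos])
    print(pos, np.round(shrunk, 3))
```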
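
And here is the heterogeneity check meta-analysts use, sketched with hypothetical effect sizes: Cochran’s Q asks whether the study means differ more than their sampling errors alone would predict, and I² expresses the answer as a proportion.

```python
import numpy as np
from scipy.stats import chi2

# Invented study effect sizes and their sampling variances
effects   = np.array([0.42, 0.55, 0.31, 0.90, 0.48])
variances = np.array([0.02, 0.03, 0.02, 0.05, 0.04])

weights = 1 / variances
pooled = (weights * effects).sum() / weights.sum()  # fixed-effect pooled mean

# Cochran's Q: weighted squared deviations of study means from the pooled mean.
# Under "no real heterogeneity," Q ~ chi-squared with k - 1 degrees of freedom.
Q = (weights * (effects - pooled) ** 2).sum()
df = len(effects) - 1
p_value = chi2.sf(Q, df)

# I^2: the proportion of total variation attributable to real heterogeneity
I2 = max(0.0, (Q - df) / Q)

print(f"pooled = {pooled:.3f}, Q = {Q:.2f} (df={df}), p = {p_value:.4f}, I^2 = {I2:.0%}")
```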

*Footnote for smart alecks who think that it’s fine to lump together apples and oranges if one’s question is about fruit: for “apples and oranges” read “apples, oranges, bricks, and Major League Baseball”.


Filed under: New ideas
