Blog

Between 2009 and 2020, Josh published more than 10,000 blog posts. Here, you can access his blog archives.

Diagnose What Ails You By Spitting on Your iPhone

December 1st, 2011

If your iPhone can diagnose your medical issues, why not your legal issues?

But Hyun Gyu Park and Byoung Yeon Won at the Korea Advanced Institute of Science and Technology in Daejeon think touchscreens could improve the process by letting your phone replace the lab work. Park suggests the lab-on-a-chip could present a tiny droplet of the sample to be pressed against a phone’s touchscreen for analysis, where an app would work out whether you have food poisoning, strep throat or flu, for example.

SCOTUSBlog Changes Their Community Feature

December 1st, 2011

Last month, in response to SCOTUSBlog’s changes, I wrote:

I love SCOTUSBlog. Their resources are vast and thorough. Frankly, I don’t read much of their new content, because, well, there is just too much! I would be curious to compare the traffic for their live broadcast of opinion hand-down days, and maybe Lyle’s summaries of oral arguments that are posted first, with all of their other content–I’m talking about the symposia and special features. I’d wager the former is the vast majority, and the latter barely gets a trickle.

Turns out I was on the right track. Yesterday Tom Goldstein announced a change to their community feature due to lack of interest:

Roughly two months ago, we introduced our new Community feature to provide readers with a forum to discuss issues relating to the Court.  After six weeks, we decided to step back and evaluate how it was working and could be improved.

The original design involved a new topic each day.  Except on a few topics of broad interest (for example, the health care litigation) the great majority of comments were those that we solicited.  We generally received between 800 and 1500 “hits” on these discussions each day.

In general, the quality of the comments was very good.  It was also respectful.  Our principal concern that the discussion would degenerate into classic, nasty Internet fights did not come to pass at all.  The number of hits was reasonable for a new feature.

On the other hand, the breadth of participation was very narrow.  Few readers posted their own comments.

Also, this structure was very resource intensive.  A different member of the blog team would have a topic each week, and generating “seed” comments could be time consuming.

So: few people visited the feature, the only people who commented were those who were solicited, and it took a lot of resources to maintain. By way of comparison, at its peak my measly blog was getting over 1,000 hits a day, and on some days 10,000. That’s about what I thought, and what I observed. With the exception of the liveblogs of opinion hand-downs, Lyle’s reports of the arguments, and the case files, most of SCOTUSBlog is largely superfluous. The best legal bloggers publish their analyses on their own blogs (the roundups capture most of these). SCOTUSBlog’s attempt to artificially create a forum for legal discourse beyond its core competencies has suffered.

On my own projects, I aim to keep the focus as narrow as possible, and tied closely to my core competencies.

Academia.Edu and “Open Science”

December 1st, 2011

Academia.Edu seems to be a cool new social network for academics. TechCrunch has a nice write-up.

I just made a profile. I wonder how this will jibe with SSRN.

This seems to be part of a trend known as Open Science, in which academics make their research and data openly available.

Founder Richard Price (whose Academia profile you can check out here) says that aside from getting an increasing amount of traction with researchers, the site is also benefitting from a recent movement among universities and researchers that’s referred to as ‘Open Science’. If you’ve ever tried looking up scholarly papers online, you’ve likely encountered one of the many paywalls put up by the journals those papers were published in. Access to these papers can be very expensive, depending on the journal — in some cases prohibitively so. In short, the information is fragmented and doesn’t flow freely.

Recently some scientists have begun to combat this by deeming their papers ‘open access’, thereby making them publicly accessible for free. Princeton now requires researchers to get a waiver if they want to assign all copyright to a journal; MIT and Harvard have both enacted open access policies as well. Many researchers believe that this open access will help streamline the research itself, allowing for faster innovation.

Academia.edu benefits from this movement because it means that researchers are free to share papers amongst themselves on the site. Price says that Academia.edu is already the largest platform for sharing these research articles, and the company looks to help foster this trend going forward.

Paul Allen had a piece in the WSJ on “Open Science” yesterday:

A crucial aspect to this project—and others the Allen Institute has pursued over the last eight years—is an “open science” research model. Early on, we considered charging commercial users for access to our online data. From a strictly financial standpoint, it made sense to reap front-end fees and, down the line, intellectual property royalties. The revenue could cover the high costs of maintenance and development to keep the resource current and useful.

But our mission was to spark breakthroughs, and we didn’t want to exclude underfunded neuroscientists who just might be the ones to make the next leap. And so we made all of our data free, with no registration required. The Institute would have no gatekeeper. Our terms-of-use agreement is about 10% as long as the one governing iTunes.

Our facility is neither the first nor the last to use a shared database to embrace “open science” and reject the competitive, single-lab R&D paradigm. Traditional research incentives—where journal publications are the coin of the realm—tend to discourage vital sharing.

Most important, we generate data for the purpose of sharing it. Since opening shop in 2003, we’ve had 23 public releases, or about three per year. We don’t wait to analyze our raw data and publish in the literature. We pour it onto the public website as soon as it passes our quality control checks. Our goal is to speed others’ discoveries as much as to springboard our own future research.

Yes, I think this is the key: sharing data *before* publishing it in journals.

This is something that I have tried to implement at the Harlan Institute and at FantasySCOTUS.

Constitution 3.0

December 1st, 2011

Jeff Rosen and Ben Wittes have a cool book by that title. A discussion on NPR here:

Rosen describes one privacy scenario, imagined at a conference by Google public policy chief Andrew McLaughlin, in which websites like Google and Facebook could someday potentially post video from live surveillance cameras online — and then archive those videos in a database.

“[McLaughlin said,] ‘It would be theoretically possible to click on a picture of anyone in the world — say me — back click on me to find out where I came from, forward click on me to see where I’m going, and basically have 24/7 ubiquitous surveillance of everyone on the planet at all times,’” Rosen says. “This is a GPS case on steroids. … [McLaughlin said], ‘First of all, should Google do this? And would it violate the Constitution?’ And the fact that there was no clear answer to that question … interested me and made me think how inadequate our current Constitutional doctrine is to resolve the most profound privacy cases of our age.”

Rosen says if this scenario — as unlikely as it may seem — were to take place, it would raise legal questions that aren’t covered by the Constitution.

This reminds me of something I wrote in 2007:

As distinguished from previous forms of public monitoring, this new form of surveillance will be omnipresent, as it can record vast areas of space over a very small period of time. It provides the users of this system with omniscience to know everything happening in a specific location at a specific time. Furthermore, this information will be indefinitely retained, and easily accessible. When future versions of this technology are properly implemented, it will be possible to enter a time, date, and location, and witness what happened at that moment as if you were there. It is a virtual time machine.

Ubiquitous surveillance? Omniveillance?
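
For illustration only, here is a minimal sketch of what the retrieval side of such a “virtual time machine” could look like: an index of archived clips keyed by place and time, queryable by a point and a moment. Every name and structure in it is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    """One archived surveillance recording (hypothetical schema)."""
    camera_id: str
    lat: float        # camera position, decimal degrees
    lon: float
    start: float      # coverage window, UNIX timestamps
    end: float
    url: str          # where the footage is stored

def clips_at(archive: list[Clip], lat: float, lon: float, t: float,
             radius_deg: float = 0.001) -> list[Clip]:
    """Return every clip that covers the given place at the given moment."""
    return [
        c for c in archive
        if c.start <= t <= c.end
        and abs(c.lat - lat) <= radius_deg
        and abs(c.lon - lon) <= radius_deg
    ]
```

Chaining such queries over successive moments is the “back click / forward click” Rosen describes.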

What to do with too much data?

December 1st, 2011

This seems to be the problem with DNA sequencing: too much data!

BGI, based in China, is the world’s largest genomics research institute, with 167 DNA sequencers producing the equivalent of 2,000 human genomes a day.

BGI churns out so much data that it often cannot transmit its results to clients or collaborators over the Internet or other communications lines because that would take weeks. Instead, it sends computer disks containing the data, via FedEx.

“It sounds like an analog solution in a digital age,” conceded Sifei He, the head of cloud computing for BGI, formerly known as the Beijing Genomics Institute. But for now, he said, there is no better way.

The field of genomics is caught in a data deluge. DNA sequencing is becoming faster and cheaper at a pace far outstripping Moore’s law, which describes the rate at which computing gets faster and cheaper.

The result is that the ability to determine DNA sequences is starting to outrun the ability of researchers to store, transmit and especially to analyze the data.

“Data handling is now the bottleneck,” said David Haussler, director of the center for biomolecular science and engineering at the University of California, Santa Cruz. “It costs more to analyze a genome than to sequence a genome.”
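
The FedEx choice is easy to check with a back-of-the-envelope calculation. Here is a rough sketch in Python; the genome size, line speed, and shipping time below are illustrative assumptions, not BGI’s actual figures:

```python
# Back-of-the-envelope: shipping disks vs. network transfer.
# All constants are illustrative assumptions, not BGI's actual numbers.

GENOME_SIZE_GB = 100     # assumed raw data per genome at deep coverage
GENOMES_PER_DAY = 2_000  # from the article
NET_SPEED_MBPS = 100     # assumed fast 2011-era leased line, megabits/s
COURIER_DAYS = 2         # assumed door-to-door shipping time

daily_output_tb = GENOME_SIZE_GB * GENOMES_PER_DAY / 1_000
net_seconds = daily_output_tb * 1e12 * 8 / (NET_SPEED_MBPS * 1e6)
net_days = net_seconds / 86_400

print(f"Daily output: ~{daily_output_tb:,.0f} TB")
print(f"Network transfer of one day's data: ~{net_days:,.0f} days")
print(f"Courier: ~{COURIER_DAYS} days, regardless of volume")
```

At those assumed rates, a single day’s output would take roughly half a year to push through the wire, which is why a two-day courier wins by a wide margin.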

One upside of this data deluge, though, is that tech companies are learning how to handle vast quantities of data, which bodes well for algorithm-based litigation assistance:

But the data challenges are also creating opportunities. There is demand for people trained in bioinformatics, the convergence of biology and computing. Numerous bioinformatics companies, like SoftGenetics, DNAStar, DNAnexus and NextBio, have sprung up to offer software and services to help analyze the data. EMC, a maker of data storage equipment, has found life sciences a fertile market for products that handle large amounts of information. BGI is starting a journal, GigaScience, to publish data-heavy life science papers.

If only Google would solve this problem:

Google might help as well.

“Google has enough capacity to do all of genomics in a day,” said Dr. Schatz of Cold Spring Harbor, who is trying to apply Google’s techniques to genomics data. Prodded by Senator Charles E. Schumer, Democrat of New York, Google is exploring cooperation with Cold Spring Harbor.

Google’s venture capital arm recently invested in DNAnexus, a bioinformatics company. DNAnexus and Google plan to host their own copy of the federal sequence archive that had once looked as if it might be closed.