
An Open Letter to Hobbyists


URL:http://cryptnet.net/mirrors/texts/gates1976.html


AN OPEN LETTER TO HOBBYISTS
February 3, 1976
By William Henry Gates III

To me, the most critical thing in the hobby market right now is the lack of good software courses, books and software itself. Without good software and an owner who understands programming, a hobby computer is wasted. Will quality software be written for the hobby market?

Almost a year ago, Paul Allen and myself, expecting the hobby market to expand, hired Monte Davidoff and developed Altair BASIC. Though the initial work took only two months, the three of us have spent most of the last year documenting, improving and adding features to BASIC. Now we have 4K, 8K, EXTENDED, ROM and DISK BASIC. The value of the computer time we have used exceeds $40,000.

The feedback we have gotten from the hundreds of people who say they are using BASIC has all been positive. Two surprising things are apparent, however, 1) Most of these "users" never bought BASIC (less than 10% of all Altair owners have bought BASIC), and 2) The amount of royalties we have received from sales to hobbyists makes the time spent on Altair BASIC worth less than $2 an hour.

Why is this? As the majority of hobbyists must be aware, most of you steal your software. Hardware must be paid for, but software is something to share. Who cares if the people who worked on it get paid?

Is this fair? One thing you don't do by stealing software is get back at MITS for some problem you may have had. MITS doesn't make money selling software. The royalty paid to us, the manual, the tape and the overhead make it a break-even operation. One thing you do do is prevent good software from being written. Who can afford to do professional work for nothing? What hobbyist can put 3-man years into programming, finding all bugs, documenting his product and distribute for free? The fact is, no one besides us has invested a lot of money in hobby software. We have written 6800 BASIC, and are writing 8080 APL and 6800 APL, but there is very little incentive to make this software available to hobbyists. Most directly, the thing you do is theft.

What about the guys who re-sell Altair BASIC, aren't they making money on hobby software? Yes, but those who have been reported to us may lose in the end. They are the ones who give hobbyists a bad name, and should be kicked out of any club meeting they show up at.

I would appreciate letters from any one who wants to pay up, or has a suggestion or comment. Just write to me at 1180 Alvarado SE, #114, Albuquerque, New Mexico, 87108. Nothing would please me more than being able to hire ten programmers and deluge the hobby market with good software.

Bill Gates

General Partner, Micro-Soft


NSA statement does not deny 'spying' on members of Congress | World news | theguardian.com


URL:http://www.theguardian.com/world/2014/jan/04/nsa-spying-bernie-sanders-members-congress


The National Security Agency on Saturday released a statement in answer to questions from a senator about whether it “has spied, or is … currently spying, on members of Congress or other American elected officials”, in which it did not deny collecting communications from legislators of the US Congress to whom it says it is accountable.

In a letter dated 3 January, Senator Bernie Sanders of Vermont defined “spying” as “gathering metadata on calls made from official or personal phones, content from websites visited or emails sent, or collecting any other data from a third party not made available to the general public in the regular course of business”.

The agency has been at the centre of political controversy since a former contractor, Edward Snowden, released thousands of documents on its activities to media outlets including the Guardian.

In its statement, which comes as the NSA gears up for a make-or-break legislative battle over the scope of its surveillance powers, the agency pointed to “privacy protections” which it says it keeps on all Americans' phone records.

The statement read: “NSA’s authorities to collect signals intelligence data include procedures that protect the privacy of US persons. Such protections are built into and cut across the entire process. Members of Congress have the same privacy protections as all US persons. NSA is fully committed to transparency with Congress. Our interaction with Congress has been extensive both before and since the media disclosures began last June.

“We are reviewing Senator Sanders’s letter now, and we will continue to work to ensure that all members of Congress, including Senator Sanders, have information about NSA’s mission, authorities, and programs to fully inform the discharge of their duties.”

Soon after Sanders' letter was published, the director of national intelligence, James Clapper, announced that the Foreign Intelligence Surveillance (Fisa) Court, the body which exists to provide government oversight of NSA surveillance activities, had renewed the domestic phone records collection order for another 90 days.

On Saturday, the New York Times published a letter from Robert Litt, in which the general counsel for the Office of the Director of National Intelligence denied allegations that Clapper lied to Congress in March, when questioned about NSA domestic surveillance.

Last month, two federal judges issued contradictory verdicts on whether such NSA surveillance was constitutional. Judge Richard Leon said it was not constitutional; Judge William Pauley said that it was.

What "viable search engine competition" really looks like.


URL:http://blog.nullspace.io/building-search-engines.html


Hacker News is up in arms again today about the RapGenius fiasco. See RapGenius statement and HN comments. One response article argues that we need more “viable search engine competition” and the HN community largely seems to agree.

In much of the discussion, there is a romantic notion that the "search engine problem" is really just a product problem, and that if we try really hard to think of good features, we can defeat the giant.

I work at Microsoft. Competing with Google is hard work. I’m going to point out some of the lessons I’ve learned along the way, to help all you spry young entrepreneurs who might want to enter the market.

(This should go without saying, but in no way was this edited or approved by my employer. Conclusions are drawn purely from public knowledge, and/or my own foolish hunches. Nothing is derived from office talk.)

Lesson 1: The problem is not only hiring smart people, but hiring enough smart people.

The Bing search relevance staff is a fraction the size of Google's. And Google's engineers are amazing. Making up for a small difference in size could be easy; competing with a brilliant workforce that is n times as large as yours is very hard, especially when n > 3, as it is in our case.

What’s worse is that the problem is not even that we need to buy such a team. MS is rich enough that it could do that if it wanted to. The problem is finding enough people to build such a team. There are a limited number of search relevance engineers available, and many of them work for Google.

This is a constant problem for all who enter the field, and to be a viable threat to Google, you will need to take this into account and compensate somehow. Obviously we have our own strategies for dealing with this problem.

Lesson 2: competing on market share is possible; relevance is much harder

Bing holds somewhere above 20% of market share according to publicly available sources. Google is still the player to beat in the field, but this is no small chunk. It is obvious that some of this market share comes from reach we have through things like IE and Windows, and through public partners like Facebook. This sort of reach isn't free, but it's not nearly as difficult as getting good relevance scores.

And getting good relevance is hard, make no mistake. Consider that Bing has invested at least millions into just search relevance — I’m not even counting infrastructure here. Since there aren’t enough relevance engineers to go around, the only alternative is to make creative investments in this area. Certainly we have had no choice but to do this in order to get the reasonably good relevance ratings we have. In this sense, it is possible to get good mileage out of the team with the right strategy, as Bing has, though it is certainly not a given.

Still, some dissonance exists here: the difference in search quality — perceived or real, it doesn’t matter — is noticeable to some subset of the people who use search regularly. As an entrepreneur you will have to confront that: how would you make these investments differently, and how much money would you need to do it correctly?

At this point, I'm honestly not sure that, given the goal of producing a scale search engine, we could have done this much better. I think the only other option might have been to try an entirely different attack vector. Either way, if you try this yourself, you will see that this is a very hard, maybe impossible, gap to bridge directly. Entrepreneurs should plan accordingly.

Lesson 3: social may pose an existential threat to Google’s style of search

Google, like all search companies, is tasked with providing an easy way to access information that's important to people.

But the information people seem to care about the most is locked away in social sites like Facebook, or at least is only derivable from information that is locked in those websites. This is inaccessible to Google. Since they rolled out G+, they must think this is a credible threat, so they will probably keep an eye on you if you approach from this angle.

Lesson 4: large companies have access to technology that is often categorically better than OSS state of the art

A good example of this is NoSQL datastores. It is generally a huge struggle to stably deploy current OSS NoSQL solutions on a couple hundred, let alone a couple thousand nodes. Facebook gave up on Cassandra, and Twitter stopped trying to migrate after a couple years' effort.

In contrast, Amazon and Google both have stable deployments on clusters that are an order of magnitude larger than the largest known OSS NoSQL store (excepting maybe Riak).

Another problem is that these solutions tend to be developed end-to-end, so that they all fit together, and are designed to work together. This is not usually true of OSS, where you tend to cobble together lots of ill-fitting tools until your system starts limping along.

This should give you a sense of the scale at which these companies operate. Entrepreneurs should not expect to compete with the raw processing power of a company like Google. You will have to either be very smart about holes in their stack, or you will have to find another way.

(NOTE: this is not to say that OSS does not have its advantages, like the fact that some tools can be reused all over the place, and when you know how to stably deploy them, you can do so across your stack, for example.)

Lesson 5: large companies are necessarily limited by their previous investments

Big disclaimer here: this is my opinion and not something that’s MS-official.

People use computers primarily to access the Internet. MS is now a devices and services company, hence its main job is to provide the Internet to people as a service on MS devices.

If MS wants to maintain its position as a field leader, it can’t just be the OS and the browser used to access the Internet — it must be a substantive part of the service itself, which means that it needs to be the page people land on when they open their browser.

And that is what Bing is. For this reason, it is more important (for the moment) that Bing exists than it is that it is equal to or better than Google in every way. Of course, it is a huge priority to make Bing better, but this is not the only consideration MS must make.

Fortunately, a similar investment problem exists for all large companies, Google included with its G+. This is an advantage for entrepreneurs, and you would be wise to use it.

Practically, this means that at a startup, one could spend more time getting a small but rabidly positive set of users, building up an engine slowly, rather than simply jumping to feature parity. This is a distinct advantage for the entrepreneur.

Lesson 6: large companies have much more data than you, and their approach to search is sophisticated

IE can track, and has tracked, users' behavior even when they're not on Bing. (We got in trouble for this once. Google says they don't do that with Chrome, by the way.) Both Bing and Google will try to figure out things like how many times you pressed the back button, how many of the search results you visited for a particular query before you found what you wanted, and so on. This is standard behavior, as it helps search engines figure out what you were really searching for. Current players also all know about a lot of small details, like the fact that it's really important to serve results fast. (There's a talk by Marissa Mayer about this somewhere, but I forget where.)

Some more examples. MSR has published papers that indicate that it’s useful to track your behavior across tabs and based on where you point your mouse. It’s important to recognize that even if Bing doesn’t end up using all of MSR’s research, the fact that they’ve spent a lot of money doing the research means that they’ve tried and discarded a lot of things.

It also takes a lot of time and energy to iterate a new search algorithm. You discover some features, you put them in your model, you pilot your model, you use that model to discover more features, and so on. This has a compounding effect that is really hard to make up for if you are behind.
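
To make that loop concrete, here is a purely schematic Python sketch. Nothing in it reflects how Bing or Google actually structure their pipelines; the feature names and the train/pilot/mine steps are hypothetical stand-ins, meant only to show why each round depends on a fresh pilot of the previous one.

# Schematic sketch of the iterate-pilot-iterate loop described above.
# All names here are hypothetical stand-ins, not real Bing/Google components.

features = {"term_match", "click_rate"}        # starting signals (made up)

def train(features):
    # Stand-in for fitting a ranking model on the current feature set.
    return {"uses": sorted(features)}

def pilot(model):
    # Stand-in for running the model on live traffic and collecting logs.
    return ["log entry"]

def mine_features(model, logs, round_no):
    # Stand-in for digging new candidate signals out of the pilot logs.
    return {f"signal_from_round_{round_no}"}

for round_no in range(3):                      # each round needs its own live pilot,
    model = train(features)                    # which is where the time and money go
    logs = pilot(model)
    features |= mine_features(model, logs, round_no)

print(sorted(features))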

If you're looking to enter the space, be aware that most traditional search problems have been investigated thoroughly. Either know roughly what you're doing, or do something different. (Or else, everyone overlooked something really important somehow.)

Conclusions

While this is by no means a comprehensive list of the challenges of building such a competitive engine, it should at least give a flavor of the sorts of problems you will have to negotiate in some way.




Economists agree: Raising the minimum wage reduces poverty


URL:http://www.washingtonpost.com/blogs/wonkblog/wp/2014/01/04/economists-agree-raising-the-minimum-wage-reduces-poverty/


One funny part of watching journalists cover the minimum wage debate is that they often have to try and referee cutting-edge econometric debates. Some studies, notably those led by UMass Amherst economist Arin Dube, argue that there are no adverse employment effects from small increases in the minimum wage. Other studies, notably those led by University of California Irvine economist David Neumark, argue there is an adverse effect. Whatever can we conclude?

(Photo caption: There are full-time and part-time openings at the Thai Kitchen restaurant in Silver Spring that pay the minimum wage. The openings represent the toughest part of the slow economic recovery occurring in the United States: Many of the new jobs are for low pay. Michael S. Williamson/The Washington Post)

But instead of diving into that controversy, let’s take a look at where these economists, and all the other researchers investigating the minimum wage, do agree: They all tend to think that raising the minimum wage would reduce poverty. That’s the conclusion of a major new paper by Dube, titled “Minimum Wages and the Distribution of Family Incomes.”

Let’s first highlight the major results. Dube uses the latest in minimum-wage statistics and finds a negative relationship between the minimum wage and poverty. Specifically, raising the minimum wage 10 percent (say from $7.25 to near $8) would reduce the number of people living in poverty 2.4 percent. (For those who thrive on jargon, the minimum wage has an “elasticity” of -0.24 when it comes to poverty reduction.)

Using this as an estimate, raising the minimum wage to $10.10 an hour, as many Democrats are proposing in 2014, would reduce the number of people living in poverty by 4.6 million. It would also boost the incomes of those at the 10th percentile by $1,700. That’s a significant increase in the quality of life for our worst off that doesn’t require the government to tax and spend a single additional dollar. And, given that this policy is self-enforcing with virtually no administrative costs while challenging the employer’s market power, it is a powerful complement to the rest of the policies the government uses to boost the living standards of the worst off, including the Earned Income Tax Credit, food stamps, Medicaid, etc.
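
As a back-of-the-envelope check on those figures (my own sketch, not from the paper), the arithmetic in Python looks like this. The only number I'm adding is the size of the US poverty population at the time, which I'm assuming to be roughly 48 million; everything else is quoted above.

# Back-of-the-envelope check of the figures above (a sketch, not from Dube's paper).
elasticity = -0.24                    # 10% wage increase -> 2.4% fewer people in poverty
wage_now, wage_proposed = 7.25, 10.10

wage_increase = (wage_proposed - wage_now) / wage_now   # ~0.39, i.e. a 39% raise
poverty_change = elasticity * wage_increase             # ~ -0.094, a 9.4% drop

people_in_poverty = 48e6              # assumed poverty population, circa 2012
print(round(poverty_change * 100, 1), "% change in poverty")                  # ~ -9.4
print(round(-poverty_change * people_in_poverty / 1e6, 1), "million fewer")   # ~ 4.5

That lands close to the 4.6 million figure quoted above; the small gap comes from my rounded poverty-population assumption.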

Now, this is normally the part where we’d have to go through the counter-arguments, using different data and techniques from different economists, to argue that the minimum wage wouldn’t do this. But this is the fun part: Dube’s paper finds a remarkable consistency across studies here. For instance, in a 2011 paper by minimum-wage opponent David Neumark, raising the minimum wage 10 percent would reduce poverty 2.9 percent (an elasticity of -0.29) for 21-44-year-old family heads or individuals. That’s very similar to what Dube finds. Neumark doesn’t mention this directly in the paper however; Dube is able to back out this conclusion using other variables that are listed.

Indeed, Dube digs out the effects of the minimum wage on poverty from 12 different studies in the new wave of literature on the topic that started in the 1990s with David Card and Alan Krueger's field-creating research. Of the 54 elasticities that Dube is able to observe in these 12 papers, 48 of them are negative. Only one study has a sizable positive elasticity, a 2005 one by David Neumark, a study that stands out for odd methodology (it lacks state and yearly fixed effects, and it assumes quantiles are moving in certain directions) that isn't standard in the field or in his subsequent work. (Indeed, it is nothing like Neumark's standard 2011 study, mentioned above, which finds that the minimum wage reduces poverty.) Including that study, there's an average elasticity of -0.15 across all the studies; tossing it, there's one of -0.20 across the other 11 studies, similar to what Dube finds.

However, these previous studies also have issues, which Dube's new study examines. This paper uses data through 2012, so there is much more substantial variation to examine between states' minimum wages compared to earlier studies from the 1990s. Meanwhile, additional controls are added, including those that deal with the business cycle as well as regional effects. The range of controls provides 8 different results, all of which are highlighted.

Now, as a general rule with these numbers, you should never extrapolate too far away from the mean — that is, you shouldn't take the effects of small changes to see what would happen if we, say, increased the minimum wage 500 percent, or to levels that don't actually exist right now. But the results are promising.

Indeed, they are promising on three different measures of poverty. There's the normal definition of poverty established in the 1960s, based on how much food costs take up in a family's budget. But the relationship is both relevant and even stronger for the poverty gap, which is how far people are away from the poverty line, and the squared poverty gap, which focuses on those with very low incomes. The elasticities here are -0.32 and -0.96 respectively, with the second having an almost one-to-one relationship because the minimum wage reduces the proportion of those with incomes less than one-half the poverty line.

What should people take away from this? The first is that there are significant benefits, whatever the costs. If you look at the economist James Tobin in 1996, for instance, he argues that the “minimum wage always had to be recognized as having good income consequences….I thought in this instance those advantages outweighed the small loss of jobs.” Since then there’s been substantially more work done arguing that the loss of jobs is smaller or nonexistent, and now we know that the advantages are even better, especially when it comes to boosting incomes of the poorest and reducing extreme poverty.

The second is that this isn’t a thing that people proposing an inequality agenda just happened to throw on the table. A higher minimum wage is a substantial response to the challenges of inequality. Opponents of a higher minimum wage focus on the idea that it largely won’t benefit the worst off. However, look at this graphic from the study:

A higher minimum wage will lead to a significant boost in incomes for the worst off, in the bottom 30 percent of the income distribution, while having no impact on the median household.

As many economists have argued, the minimum wage ”substantially ‘held up’ the lower tail of the U.S. earnings distribution” through the late 1970s, but this effect stopped as the real value of the minimum wage fell in subsequent decades. This gives us an empirical handle on how the minimum wage would help deal with both insufficient low-end wages and inequality, and the results are striking.

Charles Darwin once wrote, “If the misery of the poor be caused not by the laws of nature, but by our institutions, great is our sin.” One of the key institutions of the modern economy, the minimum wage, could dramatically reduce the misery of the poor. What would it say if we didn’t take advantage of it?

Mike Konczal is a fellow at the Roosevelt Institute, where he focuses on financial regulation, inequality and unemployment. He writes a weekly column for Wonkblog. Follow him on Twitter here.

Anatomy of a cheap USB to Ethernet adapter


URL:http://projectgus.com/2013/03/anatomy-of-a-cheap-usb-ethernet-adapter/#more-1431


Taking apart a very cheap USB to Ethernet adapter and pondering on the parts found inside.

Here are two USB to Ethernet adapters:

One of them is sold on ebay for $3.85 AU ($3.99 US), including postage to Australia. The other is sold at Apple Stores for $29.

In Linux they both use the driver for an “ASIX AX88772A” USB to Ethernet converter, even though the Apple one reports as “Apple” and is sold only for the MacBook Air.

The cheap adapter comes with drivers for OS X & Windows, as well.

When I ran a TCP throughput test with iperf, they both performed well. The Apple adapter measured throughput of 94.3Mbps. The cheap adapter measured 87.4Mbps. By comparison, the builtin ethernet on my laptop measured 94.8Mbps (after being set from gigabit to 100Mbps.)

There are a few unusual things about the cheap adapter, though. The Linux kernel log says:

usb 3-2.1.2: New USB device found, idVendor=0b95, idProduct=772a
usb 3-2.1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 3-2.1.2: Product: AX88x72A
usb 3-2.1.2: Manufacturer: ASIX Elec. Corp.
usb 3-2.1.2: SerialNumber: 000002
eth1: register 'asix' at usb-0000:0e:00.0-2.1.2, ASIX AX88772 USB 2.0 Ethernet, 00:8a:8d:8a:39:2b

Serial number "000002"? Hmm…

Also, the hardware MAC address prefix (00:8a:8d) isn’t any known vendor OUI (organisationally unique identifier.) Seems odd, although chipset vendors (like ASIX) often require the device manufacturer to register their own OUI (for instance the Apple adapter uses an Apple prefix.) For a no-name vendor, it makes sense to just make one up.
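
For readers unfamiliar with OUIs, here is a small illustrative Python sketch of the kind of check described above: the first three octets of a MAC address identify the vendor, so you compare the adapter's prefix against the registered assignments. The vendor table below is a placeholder, not real IEEE data; an actual check would look prefixes up in the IEEE OUI registry.

# Illustrative OUI lookup. The table entries are placeholders, not real
# IEEE assignments; consult the IEEE OUI registry for actual vendors.
KNOWN_OUIS = {
    "aa:bb:cc": "Example Vendor A",
    "11:22:33": "Example Vendor B",
}

def oui_of(mac: str) -> str:
    # The organisationally unique identifier is the first three octets.
    return ":".join(mac.lower().split(":")[:3])

mac = "00:8a:8d:8a:39:2b"   # the cheap adapter's address from the kernel log above
prefix = oui_of(mac)
print(prefix, "->", KNOWN_OUIS.get(prefix, "no registered vendor found"))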

Here’s a view inside the cheap adapter:

Inside the Apple adapter:

It seems $29 buys you shielding from interference. Underneath the cover:

Some similarities are visible:

  • Both adapters have a 512 byte EEPROM onboard (Atmel AT93C66B.)
  • Both adapters have an Ethernet transformer to isolate the ethernet signals from the rest of the board (MEC TM1701M & LFE8423, respectively.)

The biggest difference is this: the Apple adapter contains a clearly labelled ASIX AX88772ALF USB to Ethernet bridge, the other adapter has an unmarked chip that is not made by ASIX.

ASIX don't make any USB/Ethernet devices in a 32 pin package; their smallest package has 64 pins, the same as the genuine one above.

It’s a clone!

Cloning ASICs isn’t new to the computer world. They’ve been around since the early 80s, maybe earlier.

This is not a simple copy, though. It’s been modified to make it much cheaper to produce. It might even be a brand new ASIC, created from scratch to be compatible.

To know if it is a modified knockoff or made from scratch we could dissolve the chip packages in acid (“decapping” them) and then look at the exposed silicon die under a microscope. Not quite that keen yet, though.

The genuine AX88772A requires two crystal oscillators for accurate timing – a 25MHz oscillator for the ethernet interface clock, and a 12MHz oscillator for the USB interface clock. The cheap adapter only has a single 25MHz crystal.

How much does this save? A lot. The Apple adapter has two “TXC” brand oscillators (the shiny silver packages.) These cost around $1.33 each if you buy 15,000. This wholesale cost alone is nearly the retail price of the cheap adapter on taobao.com.

By comparison, a single large crystal resonator like the one on the cheap adapter costs $0.16 from the same source (I originally wrote 6% of the cost; per EDIT 2 below, it's closer to 20%).

EDIT: Someone on reddit pointed out that I was using ‘oscillator’ incorrectly to refer to the large crystal, which is only the resonator part of an oscillator circuit. Fixed!

EDIT 2: Gerard points out in the comments that the two TXC chips are probably just crystal resonators (not full oscillators) as well. I checked the reference schematic for ASIX's Demoboard and he's right. Thanks Gerard! So the cheapest ones they could be are around $0.40 each (12MHz/25MHz), which puts the cheaper board's single crystal closer to 20% of the cost.
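
For clarity, here is the arithmetic behind those two percentages as a short Python snippet, using the prices quoted above:

# Rough cost comparison using the prices quoted in the post.
two_full_oscillators = 2 * 1.33    # original assumption: TXC oscillators at ~$1.33 each
two_plain_crystals   = 2 * 0.40    # per EDIT 2: plain crystals at ~$0.40 each
single_cheap_crystal = 0.16        # the clone board's lone 25MHz crystal

print(single_cheap_crystal / two_full_oscillators)   # ~0.06 -> the original "6%" figure
print(single_cheap_crystal / two_plain_crystals)     # 0.20  -> the corrected "20%" figure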

In the genuine chip, the 12MHz frequency would be used to derive 480MHz for USB 2.0 High Speed by using a PLL frequency multiplier circuit. 12 x 40 = 480.

In the cheap one, I’m guessing the host’s packet sync must be used to synchronise the 480MHz clock. Without a high speed oscilloscope or a USB 2.0 logic analyser it’d be hard to tell how well this goes at meeting the 480MHz +/-500ppm requirement of the USB High Speed spec. ASIX themselves warn that if their precision 12MHz clock source has an incorrect capacitor it “can cause some problem during USB High Speed mode enumeration. For example, during the 100 times of repeatedly plug-and-unplug test, there may be 1 time that AX88772 may not be initialized properly“. You’d imagine this kind of problem is potentially much worse in the cheap adapter, although I haven’t yet noticed anything.

The Apple adapter also has many more small components – two inductors (the cheap adapter has none), over twenty five capacitors (the cheap adapter has only nineteen), more resistors. For the cheap adapter design, every fraction of a cent saved is important!

One thing that surprised me is that the cheap adapter has a functioning blue activity LED, that glows through the enclosure. The Apple adapter actually has a space on the PCB for this, but no LED in place (Apple’s designers presumably nixed it for aesthetic reasons.) I’m surprised the manufacturer paid the few cents to add this feature.

For the manufacturer of cloned/compatible ASICs, an interesting bonus is driver support. The CD that came with the cheap adapter contains ASIX’s own drivers for Windows and OS X.

The Windows drivers are the exact same digitally signed ones that Microsoft distributes through Windows Update, meaning the adapters appear to have “passed” Windows Hardware Quality Labs testing. Something the actual device manufacturer surely couldn’t have afforded.

In my simple tests both adapters seemed perfectly reliable, moving data back and forth quickly without any measurable errors. I ran an eight hour two-way ping flood (assuring plenty of collisions) with zero lost packets.

However, the cheap adapter is probably susceptible to (and a producer of) electrical noise. The Apple adapter is protected from an electrically noisy environment by its metal shielding, and extra decoupling capacitors on the board.

On the other hand, both USB and Ethernet contain mechanisms for dealing with errors introduced by interference. It’s possible the protocols are well engineered enough that you’ll never notice the difference.

It is also possible that the cloned ASIC will display hardware bugs that aren’t in the legitimate adapter. So far I haven’t found any (I tested some of the uncommon features like forcing 10Mbps, forcing half duplex, adjusting MTU.) I’d be interested to hear of any, though.

I’m probably unusual in that I find this world of cheap clone “shanzhai” hardware amazing. I’m fascinated that someone is out there redesigning existing silicon to make a knockoff that is smaller, cheaper but otherwise near-equivalent – to be used in devices that retail for less than $3.

I’d love to learn more about these secretive industries and the engineers who work in them.

How about you? Do you have any experience with dirt cheap hardware devices? How do you feel about shanzhai competing with regular firms’ R&D by cloning their hardware? Please leave a comment and let me know what you think.


New York State Is Set to Loosen Marijuana Laws - NYTimes.com


URL:http://www.nytimes.com/2014/01/05/nyregion/new-york-state-is-set-to-loosen-marijuana-laws.html?smid=tw-bna&_r=0


ALBANY — Joining a growing group of states that have loosened restrictions on marijuana, Gov. Andrew M. Cuomo of New York plans this week to announce an executive action that would allow limited use of the drug by those with serious illnesses, state officials say.

The shift by Mr. Cuomo, a Democrat who had long resisted legalizing medical marijuana, comes as other states are taking increasingly liberal positions on it — most notably Colorado, where thousands have flocked to buy the drug for recreational use since it became legal on Jan. 1.

Mr. Cuomo’s plan will be far more restrictive than the laws in Colorado or California, where medical marijuana is available to people with conditions as mild as backaches. It will allow just 20 hospitals across the state to prescribe marijuana to patients with cancer, glaucoma or other diseases that meet standards to be set by the New York State Department of Health.

While Mr. Cuomo’s measure falls well short of full legalization, it nonetheless moves New York, long one of the nation’s most punitive states for those caught using or dealing drugs, a significant step closer to policies being embraced by marijuana advocates and lawmakers elsewhere.

New York hopes to have the infrastructure in place this year to begin dispensing medical marijuana, although it is too soon to say when it will actually be available to patients.

Mr. Cuomo’s shift comes at an interesting political juncture. In neighboring New Jersey, led by Gov. Chris Christie, a Republican whose presidential prospects are talked about even more often than Mr. Cuomo’s, medical marijuana was approved by his predecessor, Jon S. Corzine, a Democrat, but was put into effect only after Mr. Christie set rules limiting its strength, banning home delivery, and requiring patients to show they have exhausted conventional treatments. The first of six planned dispensaries has already opened.

Meanwhile, New York City’s new mayor, Bill de Blasio, had quickly seemed to overshadow Mr. Cuomo as the state’s leading progressive politician.

For Mr. Cuomo, who has often found common ground with Republicans on fiscal issues, the sudden shift on marijuana — which he is expected to announce on Wednesday in his annual State of the State address — was the latest of several instances in which he has embarked on a major social policy effort sure to bolster his popularity with a large portion of his political base.

In 2011, he successfully championed the legalization of same-sex marriage in New York. And a year ago, in the aftermath of the mass school shooting in Newtown, Conn., Mr. Cuomo pushed through legislation giving New York some of the nation’s toughest gun-control laws, including a strict ban on assault weapons. He also has pushed, unsuccessfully so far, to strengthen abortion rights in state law.

The governor’s action also comes as advocates for changing drug laws have stepped up criticism of New York City’s stringent enforcement of marijuana laws, which resulted in nearly 450,000 misdemeanor charges from 2002 to 2012, according to the Drug Policy Alliance, which advocates more liberal drug laws.

During that period, medical marijuana became increasingly widespread outside New York, with some 20 states and the District of Columbia now allowing its use.

Mr. Cuomo voiced support for changing drug laws as recently as the 2013 legislative session, when he backed an initiative to decriminalize so-called open view possession of 15 grams or less. And though he said he remained opposed to medical marijuana, he indicated as late as April that he was keeping an open mind.

His shift, according to a person briefed on the governor’s views but not authorized to speak on the record, was rooted in his belief that the program he has drawn up can help those in need, while limiting the potential for abuse. Mr. Cuomo is also up for election this year, and polls have shown overwhelming support for medical marijuana in New York: 82 percent of New York voters approved of the idea in a survey by Siena College last May.

Still, Mr. Cuomo’s plan is sure to turn heads in Albany, the state’s capital. Medical marijuana bills have passed the State Assembly four times — most recently in 2013 — only to stall in the Senate, where a group of breakaway Democrats shares leadership with Republicans, who have traditionally been lukewarm on the issue.

Mr. Cuomo has decided to bypass the Legislature altogether.

Thomas Kaplan contributed reporting.

This article has been revised to reflect the following correction. Correction: January 4, 2014: An earlier version of a map with this article reversed the locations of North and South Dakota.



Five Reasons Why You Should Probably Stop Using Antibacterial Soap | Surprising Science


URL:http://blogs.smithsonianmag.com/science/2014/01/five-reasons-why-you-should-probably-stop-using-antibacterial-soap/?utm_source=facebook.com&utm_medium=socialmedia&utm_campaign=01032014&utm_content=surprisingscienceantibacterialsoap


A few weeks ago, the FDA announced a bold new position on antibacterial soap: Manufacturers have to show that it’s both safe and more effective than simply washing with conventional soap and water, or they have to take it off the shelves in the next few years.

About 75 percent of liquid antibacterial soaps and 30 percent of bars use a chemical called triclosan as an active ingredient. The drug, which was originally used strictly in hospital settings, was adopted by manufacturers of soaps and other home products during the 1990s, eventually ballooning into an industry that's worth an estimated $1 billion. Apart from soap, we've begun putting the chemical in wipes, hand gels, cutting boards, mattress pads and all sorts of home items as we try our best to eradicate any trace of bacteria from our environment.

But triclosan’s use in home over-the-counter products was never fully evaluated by the FDA—incredibly, the agency was ordered to produce a set of guidelines for the use of triclosan in home products way back in 1972, but only published its final draft on December 16 of last year. Their report, the product of decades of research, notes that the costs of antibacterial soaps likely outweigh the benefits, and forces manufacturers to prove otherwise.

Bottom line: Manufacturers have until 2016 to do so, or pull their products from the shelves. But we’re here to tell you that you probably shouldn’t wait that long to stop using antibacterial soaps. Here’s our rundown of five reasons why that’s the case:

1. Antibacterial soaps are no more effective than conventional soap and water. As mentioned in the announcement, 42 years of FDA research—along with countless independent studies—have produced no evidence that triclosan provides any health benefits as compared to old-fashioned soap.

“I suspect there are a lot of consumers who assume that by using an antibacterial soap product, they are protecting themselves from illness, protecting their families,” Sandra Kweder, deputy director of the FDA’s drug center, told the AP. “But we don’t have any evidence that that is really the case over simple soap and water.”

Manufacturers say they do have evidence of triclosan’s superior efficacy, but the disagreement stems from the use of different sorts of testing methods. Tests that strictly measure the number of bacteria on a person’s hands after use do show that soaps with triclosan kill slightly more bacteria than conventional ones.

But the FDA wants data that show that this translates into an actual clinical benefit, such as reduced infection rates. So far, analyses of the health benefits don’t show any evidence that triclosan can reduce the transmission of respiratory or gastrointestinal infections. This might be due to the fact that antibacterial soaps specifically target bacteria, but not the viruses that cause the majority of seasonal colds and flus.

2. Antibacterial soaps have the potential to create antibiotic-resistant bacteria. The reason that the FDA is making manufacturers prove these products’ efficacy is because of a range of possible health risks associated with triclosan, and bacterial resistance is first on the list.

Heavy use of antibiotics can cause resistance, which results from a small subset of a bacteria population with a random mutation that allows it to survive exposure to the chemical. If that chemical is used frequently enough, it’ll kill other bacteria, but allow this resistant subset to proliferate. If this happens on a broad enough scale, it can essentially render that chemical useless against the strain of bacteria.

This is currently a huge problem in medicine—the World Health Organization calls it a "threat to global health security." Some bacteria species (most notably, MRSA) have even acquired resistance to several different drugs, complicating efforts to control and treat infections as they spread. Health officials say that further research is needed before we can say that triclosan is fueling resistance, but several studies have hinted at the possibility.

3. The soaps could act as endocrine disruptors.  A number of studies have found that, in rats, frogs and other animals, triclosan appears to interfere with the body’s regulation of thyroid hormone, perhaps because it chemically resembles the hormone closely enough that it can bind to its receptor sites. If this is the case in humans, too, there are worries that it could lead to problems such as infertility, artificially-advanced early puberty, obesity and cancer.

These same effects haven’t yet been found in humans, but the FDA calls the animal studies “a concern”—and notes that, given the minimal benefits of long-term triclosan use, it’s likely not worth the risk.

4. The soaps might lead to other health problems, too. There’s evidence that children with prolonged exposure to triclosan have a higher chance of developing allergies, including peanut allergies and hay fever. Scientists speculate that this could be a result of reduced exposure to bacteria, which could be necessary for proper immune system functioning and development.

Another study found evidence that triclosan interfered with muscle contractions in human cells, as well as muscle activity in live mice and minnows. This is especially concerning given other findings that the chemical can penetrate the skin and enter the bloodstream more easily than originally thought. A 2008 survey, for instance, found triclosan in the urine of 75 percent of people tested.

5. Antibacterial soaps are bad for the environment. When we use a lot of triclosan in soap, that means a lot of triclosan gets flushed down the drain. Research has shown that small quantities of the chemical can persist after treatment at sewage plants, and as a result, USGS surveys have frequently detected it in streams and other bodies of water. Once in the environment, triclosan can disrupt algae’s ability to perform photosynthesis.

The chemical is also fat-soluble—meaning that it builds up in fatty tissues—so scientists are concerned that it can biomagnify, appearing at greater levels in the tissues of animals higher up the food chain, as the triclosan of all the plants and animals below them is concentrated. Evidence of this possibility was turned up in 2009, when surveys of bottlenose dolphins off the coast of South Carolina and Florida found concerning levels of the chemical in their blood.

What Should You Do?

If you’re planning on giving up antibacterial soap—like Johnson & Johnson, Kaiser Permanente and several other companies have recently done—you have a couple options.

One is a non-antibiotic hand sanitizer, like Purell, which doesn't contain any triclosan and simply kills both bacteria and viruses with good old-fashioned alcohol. Because the effectiveness of hand-washing depends on how long you wash for, a quick squirt of sanitizer might be more effective when time is limited.

Outside of hospitals, though, the CDC recommends the time-tested advice you probably heard as a child: wash your hands with conventional soap and water. That’s because while alcohol from hand sanitizer kills bacteria, it doesn’t actually remove dirt or anything else you may have touched. But a simple hand wash should do the trick. The water doesn’t need to be hot, and you’re best off scrubbing for about 30 seconds to get properly clean.

Errata Security: Why we have to boycott RSA


URL:http://blog.erratasec.com/2014/01/why-we-have-to-boycott-rsa.html


The only thing stopping corporations from putting NSA backdoors into their products is the risk of getting caught. RSA got caught backdooring BSAFE. If nobody seems to care, if RSA doesn't suffer consequences, then nothing will stop other corporations from following suit.

RSA is the singular case. The Snowden leaks make us suspicious of other companies, like Google, Yahoo, Apple, Microsoft, and Verizon, but only with RSA do we have a "smoking gun". In some cases the companies had no choice (Verizon). In other cases, it appears that rather than cooperating with the government, the companies may in fact be yet another victim (Google). RSA is the standout that deserves our attention.

I mention this because people on Twitter are taking the stance that instead of boycotting RSA that we should attend their conference, to represent our views, to engage people in the conversation, to be "ambassadors of liberty". This is nonsense. It doesn't matter how many people you convince that what the RSA did is wrong if that doesn't change their behavior. If everyone agrees with you, but nobody boycotts RSA's products/services, then it sends the clear message to other corporations that there is no consequence to bad behavior. It sends the message to other corporations that if caught, all that happens is a lot of talk and no action. And since the motto is that "all PR is good PR", companies see this as a good thing.

The word to describe those who do business with the RSA, even while criticizing their backdoor, is "collaborator". This was the word used by the French ("collabo") to describe the members of the Vichy government who aided the invading Germans. Instead of giving up their positions of power, wealth, and prestige, members of the French government just kept doing their same job. Their reasoning was that they were really anti-German, but that they could do more good for the French people inside the occupation government than without. The French didn't buy this reasoning, and neither should you. Speakers who claim they can do more good collaborating with RSA, while speaking out against RSA, are still enjoying the speaking fees and the prestige of talking at a major conference.

Sadly, I haven't spoken at RSA in many years. Had I been accepted to talk this year, I'd certainly be canceling it. Moreover, I won't be talking or attending any future conference labeled "RSA" ever.

The reason isn't that I'm upset at RSA, or think that they are evil. I think RSA was mostly tricked by the NSA instead of consciously making the choice to backdoor their products. Instead, what I care about is sending the message to other corporations that they should fear this sort of thing happening to them. If you are a security company, and you get caught backdooring your security for the NSA, you should go out of business.

Web Standards Killed the HTML Star – JeffCroft.com


URL:http://jeffcroft.com/blog/2014/jan/03/web-standards-killed-the-html-star/


Blog entry // 01.03.2014 // 3:57 PM // 11 Comments

Web Standards Killed the HTML Star

When Zeldman wrote our bible, we were there, pounding the table in board rooms for using CSS instead of tables for layout, for image replacement techniques that retained the accessibility of the content, for semantic code, and all the rest.

We were there. Many of us even wrote our own books and spoke at conferences all over the world on the backs of the movement Zeldman and company started. It was an awesome time, and we all accomplished a lot for the greater good of the web. We were the "gurus" who taught the world how to do HTML and CSS the right way.

The reason the Web Standards Movement mattered was that the browsers sucked. The stated goal of the Movement was to get browser makers on board with web standards such that all of our jobs as developers would be easier.

What we may not have realized is that once the browsers don't suck, being an HTML and CSS "guru" isn't really a very marketable skillset. 80% of what made us useful was the way we knew all the quirks and intricacies of the browsers. Guess what? Those are all gone. And if they're not, they will be in the very near future. Then what?

A lot of folks who came up from that time and headspace have diversified their skillsets since. Many are now programmers, or project managers, or creative directors, or business owners. But a lot of others are still making a go of it as an HTML and CSS guru, often in a comfortable job they’ve had for years. What happens when that gig comes to an end?

I personally know several people who feel unequipped in today's job market, because their skillset is a commodity now. Today, when you interview for a job titled "Front End Developer," you're going to be grilled on everything from Backbone to Angular to Node. Prefer "Product Designer?" That, too, requires a bunch of skills you didn't learn on A List Apart. HTML and CSS gurumanship is no longer enough to get yourself a job — rather, it's one of the quick yes/no questions you're asked on the phone screen before you even get to talk to the hiring manager.

In some ways, the Web Standards Movement killed the Web Standards Guru. We all should have seen this coming. The goal of the Web Standards Movement was for it to not have to exist — for the browsers to be good enough that there wasn’t a need for such a movement.

Or rather, that there wasn’t a need for you. Diversify or die.

P.S.— I see a similar eventual outcome for all those who have made their names as a “social media guru.” Eventually, that’s just going to be a skillset that’s baked into all kinds of marketing positions — not something one can ride off into the sunset on the back of.

Idaho to take back control of privately run state prison | World news | theguardian.com


URL:http://www.theguardian.com/world/2014/jan/03/idaho-take-control-privately-run-state-prison


Idaho's governor says the corrections department will take over operation of the largest privately run prison in the state after more than a decade of mismanagement and other problems at the facility.

Nashville-based Corrections Corporation of America has contracted with the state to run the prison since it was built in 1997. Taxpayers currently pay CCA $29m per year to operate the 2,080-bed prison south of Boise.

Governor C L "Butch" Otter made the announcement Friday at a preview of the upcoming legislative session.

For years, Otter has been a champion of privatizing certain sectors of government, including prisons.

In 2008, he floated legislation to change state laws to allow private companies to build and operate prisons in Idaho and import out-of-state inmates. In 2008, he suggested privatizing the 500-bed state-run Idaho Correctional Institution-Orofino.

The CCA prison has been the subject of multiple lawsuits alleging rampant violence, understaffing, gang activity and contract fraud by CCA.

CCA acknowledged last year that falsified staffing reports were given to the state showing thousands of hours were staffed by CCA workers when the positions were actually vacant. And the Idaho state police is investigating the operation of the facility for possible criminal activity.

A federal judge also has held CCA in contempt of court for failing to abide by the terms of a settlement agreement reached with inmates in a lawsuit claiming high rates of violence and chronic understaffing at the prison.

Meanwhile, Idaho prison officials, led by IDOC director Brent Reinke, have lobbied to allow the agency to put together its own proposal and cost analysis for running the prison. Each time, however, Reinke and his staff have been rebuffed by the state board of correction.

Recently, board chairwoman Robin Sandy said she opposed the idea because she didn't want to grow state government.

Life After Amazon


URL:http://publishersweekly.com/pw/by-topic/columns-and-blogs/soapbox/article/60517-life-after-amazon.html


In January 2012, one of the sales representatives for my company, Educational Development Corp., made a call on a school that was unfamiliar with our books. As a result of the presentation, the school committed to a purchase, but a few days later the rep discovered it had placed the order with Amazon instead. This was particularly distressing to me: Selling 101 dictates that the person who makes the sale is the one who should be compensated. Recently, I heard from a longtime customer in Mill Valley, Calif., that closed its doors because it had effectively become a “showroom” for Internet retailers—people would see books in the store and then buy them online.

Amazon continues to trounce many of its bookselling rivals, in part because of deep discounting and sales tax exemptions (in most states). But, for all intents and purposes, Amazon does not make money. Amazon lost money in 2012, and in the most recent quarter it reported a virtually meaningless margin relative to its size. As business journalist Matthew Yglesias wrote in Slate’s Moneybox blog in early 2013, “Amazon, as far as I can tell, is a charitable organization being run by elements of the investment community for the benefits of consumers.”

In late February, 2012, after months of deliberation, taking a variety of factors into account, I made the difficult decision to stop selling our Kane Miller and Usborne books on Amazon. It was a bold move (or a misguided one, depending on your point of view), with little support in an industry where many were experiencing record growth through Amazon sales.

But, as I subsequently told the New York Times, I had long felt that the rapid growth of Amazon was bad news for the publishing industry. And I believed that Amazon was trying to gain control of publishing and other industries by making it impossible for other retailers to compete effectively (or, in light of recent developments and price wars, impossible for them to compete at all.) I still believe this—just as I believe in healthy competition and a level playing field, just as I feel that unbridled enthusiasm for short-term growth can often be very shortsighted.

According to figures that appeared in Salon.com, “roughly 60% of book sales—print and digital—now occur online. But buyers first discover their books online only about 17% of the time. Internet booksellers specifically, including Amazon, account for just 6% of discoveries. Where do readers learn about the titles they end up adding to the cart on Amazon? In many cases, at bookstores.”

In EDC’s case, discovery is also attributable to our direct-selling division. I believe that recommendations from friends, mentions in media, and other sources generate more sales than most people think, and that the personal touch still matters a great deal.

EDC has supplied Usborne books to bookstores, toy stores, gift shops, and museum shops for over 30 years (Kane Miller books have been a part of EDC for the last five years). In addition to our retail sales division, EDC operates a division that sells direct to consumers, as well as to schools and libraries, supports fund-raising and book fairs, and offers a matching grant program for organizations. I believe in the free market system, which gives consumers the choice of where, when, and from whom to buy, but I also strongly support the Fair Tax Amendment, which will force Amazon and other online retailers to collect state sales taxes, leveling the playing field between them and bricks-and-mortar retailers. While many states and municipalities are facing fiscal crises, it is estimated that they are losing a total of $23 billion in sales taxes each year due to Internet purchases, and these purchases are also destroying local tax-paying businesses.

It has been nearly two years since the Amazon decision and I can proudly report that our company is still alive, well, and prospering. Our direct-selling division has recorded seven consecutive months of year-over-year growth, and new sales force hires are up 25% over last year as well. And, thanks to our loyal retail customers, we recorded the largest ever sales month in the history of that division in October—and we did it without Amazon!

I believe the marketplace is large enough to accommodate many different sales channels, and I choose to lend my support to those who actually sell our products, whether it is our retail customers or our thousands of direct-sales representatives. All we need is a level playing field, and there will be a great “life after Amazon.”

Randall White is the chairman, CEO, and president of Educational Development Corp.

True facts about Ocean Radiation and the Fukushima Disaster | Deep Sea News


URL:http://deepseanews.com/2013/11/true-facts-about-ocean-radiation-and-the-fukushima-disaster/


On March 11th, 2011 the Tōhoku earthquake and resulting tsunami wreaked havoc on Japan. It also resulted in the largest nuclear disaster since Chernobyl when the tsunami damaged the Fukushima Daiichi Nuclear Power Plant. Radioactive particles were released into the atmosphere and ocean, contaminating groundwater, soil and seawater which effectively closed local Japanese fisheries.

Rather unfortunately, it has also led to some wild speculation on the widespread dangers of Fukushima radiation on the internet. Posts with titles like "Holy Fukushima – Radiation From Japan Is Already Killing North Americans" and "28 Signs That The West Coast Is Being Absolutely Fried With Nuclear Radiation From Fukushima" (which Southern Fried Science has already thoroughly debunked) keep popping up on my facebook feed from well-meaning friends.

I'm here to tell you that these posts are just plain garbage. While terrible things happened around the Fukushima Power Plant in Japan, Alaska, Hawaii and the West Coast aren't in any danger. These posts were meant to scare people (and possibly written by terrified authors). They did just that, but there is a severe lack of facts in these posts. Which is why I am here to give you the facts, and nothing but the facts.

WHAT WAS RELEASED INTO THE OCEAN AT FUKUSHIMA?

The radioactive rods in the Fukushima power plant are usually cooled by seawater [CORRECTION: they are usually cooled by freshwater. As a last ditch emergency effort at Fukushima seawater was used as a coolant.]. The double whammy of an earthquake and a tsunami pretty much released a s**tstorm of badness: the power went out, meltdown started and eventually the radioactive cooling seawater started leaking (and was also intentionally released) into the ocean. Radioactive isotopes were also released into the air and were absorbed by the ocean when they rained down upon it. These two pathways introduced mostly Iodine-131, Cesium-137, and Cesium-134, but also a sprinkling of Tellurium, Uranium and Strontium to the area surrounding the power plant.

There aren’t great estimates of how much of each of these isotopes was released into the ocean, since TEPCO, the company that owns the power plant, hasn’t exactly been forthcoming with information, but current estimates are around 538,100 terabecquerels (TBq), which is above Three Mile Island levels but below Chernobyl levels. And as it turns out, they recently found that contaminated groundwater has also started leaking into the sea. TEPCO, the gift that keeps on giving.

WHAT’S A BECQUEREL? WHAT’S A SIEVERT?

Units of radiation are confusing. When you start reading the news/literature/blogs, there are what seem like a billion different units to explain radiation. But fear not, I’ve listed them below along with what they mean (SI units first).

Becquerel [Bq] or Curie [Ci]: radiation emitted from a radioactive material (1 Ci = 3.7 × 10^10 Bq)

Gray [Gy] or Rad [rad]: radiation absorbed by another material (1 Gy = 100 rad)

Sieverts [Sv]* or “roentgen equivalent in man” [rem]: how badly radiation will damage biological tissue (1 Sv = 100 rem)

You can convert from Grays and Rads to Rem and Sieverts, but you have to know what kind of radiation it is. For example alpha radiation from naturally occurring Polonium-210 is more damaging to biological tissues than gamma radiation from Cesium-137. Even if you absorbed the same number of Grays from Cesium or Polonium, you would still effectively receive more damaging radiation from Polonium because the number of Sieverts is higher for Polonium than Cesium. And kids, Sieverts and Seavers  are both dangerous to your health but please don’t confuse them.
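
To make those conversions concrete, here is a tiny Python sketch (my own illustration, not from the article). The 3.7 × 10^10, 100 rad, and 100 rem factors come from the list above; the weighting factors of 1 for gamma/beta and 20 for alpha are the standard ICRP values, and the function and variable names are just made up for this example.

# Rough helpers for the radiation units described above.
CI_TO_BQ = 3.7e10     # 1 Ci = 3.7 x 10^10 Bq
GY_TO_RAD = 100.0     # 1 Gy = 100 rad
SV_TO_REM = 100.0     # 1 Sv = 100 rem

# Standard ICRP radiation weighting factors: how damaging a given absorbed dose is.
WEIGHTING = {"gamma": 1.0, "beta": 1.0, "alpha": 20.0}

def absorbed_to_equivalent(gray, radiation_type):
    """Convert an absorbed dose in grays to an equivalent dose in sieverts."""
    return gray * WEIGHTING[radiation_type]

# The same absorbed dose does more biological damage as alpha radiation (Polonium-210)
# than as gamma radiation (Cesium-137):
dose_gy = 1e-6  # one microgray absorbed
print(absorbed_to_equivalent(dose_gy, "gamma"))  # 1e-06 Sv
print(absorbed_to_equivalent(dose_gy, "alpha"))  # 2e-05 Sv
print(2.0 * CI_TO_BQ)                            # 2 curies expressed in becquerels: 7.4e10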

WHAT’S CESIUM-137?

Cesium-137 is a product of nuclear fission. Before us humans, there was no Cesium-137 on Earth. But then we started blowing stuff up with nuclear bombs and VOILA!, there are now detectable, but safe, levels of Cesium-137 in all the world’s oceans.

WHAT DO THE MAPS OF FUKUSHIMA RADIATION IN THE PACIFIC REALLY TELL US?

There are a bunch of maps being thrown around on the internet as evidence that we are all going to die from Fukushima radiation. I’m going to dissect them here. Apologies in advance for the dose of snark in this section, because some of these claims are just god awful. Spoiler: radiation probably has reached the West Coast, but it’s not dangerous.

MAP OF TERROR #1: The Rays of Radioactive Death!

This is not a map of Fukushima radiation spreading across the Pacific. This is a map of the estimated maximum wave heights of the Japanese Tōhoku tsunami by modelers at NOAA. In fact, tsunamis don’t even transport particles horizontally in the deep ocean, so there is no way a tsunami could spread radiation (except maybe locally, at scales of several miles, as the wave breaks onshore). Dear VC reporter, I regret to inform you that this cover image could be the poster child for the importance of journalistic fact-checking for years to come.

MAP OF TERROR #2: EHRMAGHAD radioactive SPAGHATTA NADLES attack Hawaii!

I mean I guess this is a bit better. At least this map used an ocean model that actually predicts where radioactive particles will be pushed around by surface ocean currents. But it still gets a BIG FAT FAIL. The engineering company that put this image/piece of crap out there couldn’t even be bothered to put a legend on the map. Their disclaimer says “THIS IS NOT A REPRESENTATION OF THE RADIOACTIVE PLUME CONCENTRATION.” Then what do the colors mean?

MAP OF TERROR #3: THE BLOB! 

It’s true, oceanographic models have shown that radiation from Fukushima has probably already hit the Aleutian and Hawaiian Island chains, and should reach the California Coast by Fall 2014 [Behrens et al. 2012]. The map above shows what the spread of Cesium-137 from the Fukushima reactor would look like right now. I mean, radiation is apparently EVERYWHERE! But what is missing from most of the discussion of these maps is what the colors ACTUALLY mean.

We shall now seek guidance from the little box in the upper right-hand corner of the map called the legend**. The colors show the decrease in the concentration of Cesium-137 isotopes since being emitted from Fukushima. For example, the red areas indicate the Fukushima Cesium-137 is now more than 10,000 times less concentrated than when it was released. The California Coast, more than a million times less. The punchline is that overall concentrations of radioactive isotopes, and therefore radioactivity, in the Pacific will increase from pre-Fukushima levels, but it will be way less than what was seen in coastal Japan and definitely not enough to be harmful elsewhere (we’ll get to more of that later).

** As Eve Rickert has thoughtfully pointed out, my description of the image is a little confusing. I’ve added corrections in blue to clarify.

HOW MUCH RADIATION WILL REACH THE WEST COAST?

Practically, what does ten thousand or a million times less radiation mean? It means that these models estimate the West Coast and the Aleutians will see radiation levels anywhere from 1-20 Bq/m3, while the Hawaiian Islands could see up to 30 Bq/m3 [Behrens et al. 2012, Nakano et al. 2012, Rossi et al. 2013].

I could write a small novel explaining why the numbers differ between the models. For those that love the details, here’s a laundry list of those differences: the amount of radiation initially injected into the ocean, the length of time it took to inject the radiation (slowly seeping or one big dump), the physics embedded in the model, the background ocean state, the number of 20-count shrimp per square mile (Just kidding!), atmospheric forcing, inter-annual and multi-decadal variability and even whether atmospheric deposition was incorporated into the model.

Like I said before, the West Coast will probably not see more than 20 Bq/m3 of radiation. Compare these values to the map of background radiation of Cesium-137 in the ocean before Fukushima (from 1990). Radiation will increase in the Pacific, but it’s at most 10 times higher than previous levels, not thousands. Although looking at this map I would probably stop eating Baltic Herring fish oil pills and Black Sea Caviar (that radiation is from Chernobyl) before ending the consumption of  fish from the Pacific Ocean.

[source: http://www.whoi.edu/page.do?pid=83397&tid=3622&cid=94989]

WILL THE RADIATION REACHING THE WEST COAST BE DANGEROUS?

No, it will not be dangerous. Even within 300 km of Fukushima, the additional radiation that was introduced by the Cesium-137 fallout is still well below the background radiation levels from naturally occurring radioisotopes. By the time those radioactive atoms make their way to the West Coast, the radiation will be even more diluted and therefore not dangerous at all.

It’s not even dangerous to swim off the coast of Fukushima. Buesseler et al. figured out how much radiation damage you would get if you doggie paddled about Fukushima (yes, science has given us radioactive models of human swimmers). It was less than 0.03% of the daily radiation an average Japanese resident receives. Tiny! Hell, the radiation was so small that even immediately after the accident, scientists did not wear any special equipment to handle the seawater samples (but they did wear detectors just in case). If you want danger, you’re better off licking the dial on an old-school glow-in-the-dark watch.

CAN I EAT FISH FROM THE PACIFIC?

For the most part the answer is YES. Some fisheries in Japan are still closed because of radioactive contamination. Bottom fish are especially prone to contamination because the fallout collects on the seafloor where they live. Contaminated fish shouldn’t be making it to your grocery store, but I can’t guarantee that, so if you are worried, just eat fish from somewhere other than Japan.

Fish from the rest of the Pacific are safe. To put it mildly, most fish are kinda lazy. They really don’t travel that far, so when you catch a Mahi Mahi off the coast of Hawaii it’s only going to be as contaminated as the water there, which isn’t very much. Hyperactive fish, such as tuna, may be more radioactive than local lazy fish because they migrate so far. As Miriam pointed out in this post, there is a detectable increase of radiation in tuna because they were at one point closer to Fukushima, but the levels are not hazardous.

To alleviate fears that you may be glowing due to too many visits to your local sushi joint, Fisher et al. figured out exactly how much damaging radiation you would receive from eating a tower of tuna rolls. Seriously. Science is just that awesome. Supermarket tuna hunters would receive 0.9 μSv of radiation, while the outdoor subsistence tuna hunter would receive 4.7 μSv. These values are about the same as, or a little less than, the amount a person receives from natural sources.

To put 0.9 μSv of radiation in perspective, check out this awesome graph of radiation by xkcd. You’ll get the same amount of radiation by eating 9 bananas. Monkeys might be doomed, but you are not.
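
For the curious, the banana math is simple: the xkcd chart puts eating one banana at about 0.1 μSv, so 0.9 μSv works out to roughly nine bananas’ worth. A quick sanity check (my numbers, with the per-banana dose taken from that chart):

dose_from_tuna_usv = 0.9    # the supermarket tuna dose from Fisher et al.
dose_per_banana_usv = 0.1   # eating one banana, per the xkcd radiation chart
print(dose_from_tuna_usv / dose_per_banana_usv)   # about 9 bananas' worth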

I EAT PACIFIC FISH AND SO CAN YOU!

I hope this list of facts has answered most of your questions and convinced you the Pacific and its inhabitants will not be fried by radiation from Fukushima. I certainly feel safe eating sustainable seafood from the Pacific and so should you. If you are still unsure, please feel free to ask questions in the comments section below.

UPDATE #1: CONTRIBUTIONS FROM GROUNDWATER LEAKS

There’s been a lot of discussion in the comments about the contribution from the groundwater leaks. I did some homework and here’s what I came up with. (Also thanks to everyone for the interesting discussions in the comments!)

The groundwater leaks are in fact problematic, but what has been released into the ocean is MUCH less than the initial release (although I admit the groundwater itself has extremely high radiation levels). The estimates from Jota Kanda are that 0.3 TBq per month (1 TBq = 10^12 Bq) of contamination is leaking into the ocean with the groundwater, which has added another 9.6 TBq of radiation into the sea at most. The initial releases were about 16.2 PBq (1 PBq = 10^15 Bq), about 1500 times more radiation. With this in mind, the additional radioactivity leaking in with the groundwater isn’t a relatively large addition to the ocean.
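
As a back-of-the-envelope check of those numbers (my own arithmetic, not the author’s): the roughly 32 months assumed below is simply the time from the March 2011 accident to late 2013.

leak_rate_tbq_per_month = 0.3                    # Jota Kanda's estimate
months_elapsed = 32                              # assumed: March 2011 to ~November 2013
groundwater_total_tbq = leak_rate_tbq_per_month * months_elapsed
print(groundwater_total_tbq)                     # ~9.6 TBq, the figure quoted above

initial_release_tbq = 16.2e3                     # 16.2 PBq expressed in TBq
print(initial_release_tbq / groundwater_total_tbq)   # ~1700, i.e. the initial release was
                                                     # over a thousand times larger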

The models by Behrens and Rossi used initial source functions of 10 PBq and 22 PBq, which are on par with the most recent estimates. Since their models used a much higher source function, that says to me that this relatively smaller input from groundwater still won’t raise the radioactivity to dangerous levels on the West Coast, Alaska and Hawaii. Recent observations around Hawaii by Kamenik et al. also suggest that the models may have even overestimated the amount of radiation that hit Hawaii, which is good news.

But there are caveats to this information as well. The leaking groundwater contains strontium and tritium, which are more problematic than Cesium-137. But it sounds like strontium accumulates in bones and is only a problem if you eat small fish with the bones in, like sardines (and it will only affect sardines caught near Japan, since they don’t travel far). I suspect there might be some precedent for understanding the dangers of tritium in seawater from the 20th-century nuclear testing in atolls, but I really don’t know. There is also 95 TBq of radioactive cesium in the sediment around Fukushima, which is still super problematic for bottom-dwelling fish and therefore local Japanese fisheries. Lastly, another source is terrestrial runoff. These numbers haven’t been quantified, but they are probably minor because they represent a fraction of the total deposition from atmospheric fallout, which itself was a fraction of what was released into the ocean.

So even with the new groundwater leaks, the available evidence still tells me I can eat fish from the West Coast, Hawaii, and Alaska.

http://www.nature.com/news/ocean-still-suffering-from-fukushima-fallout-1.11823

http://www.biogeosciences.net/10/6045/2013/bg-10-6045-2013.pdf

http://newswatch.nationalgeographic.com/2013/09/11/fukushima-fallout-not-affecting-u-s-caught-fish/

[DISCLAIMER: The creators of the NOAA tsunami map work in my building. I secretly fangirl squeal when I walk past their offices. I recently had coffee with Joke F. Lübbecke, who also works in my building. It was caffeinated.]

*Confusingly, oceanographers also co-opted the acronym Sv for Sverdrups, their unit for volume transport. 1 Sverdrup = 1 Sv = one million cubic metres per second = 400 Olympic swimming pools’ worth of water passing your house every second.

SOURCES:

Behrens, Erik, et al. “Model simulations on the long-term dispersal of 137Cs released into the Pacific Ocean off Fukushima.” Environmental Research Letters 7.3 (2012): 034004.

Buesseler, Ken O., et al. “Fukushima-derived radionuclides in the ocean and biota off Japan.” Proceedings of the National Academy of Sciences 109.16 (2012): 5984-5988.

Fisher, Nicholas S., et al. “Evaluation of radiation doses and associated risk from the Fukushima nuclear accident to marine biota and human consumers of seafood.” Proceedings of the National Academy of Sciences (2013).

Nakano, Masanao, and Pavel P. Povinec. “Long-term simulations of the 137Cs dispersion from the Fukushima accident in the world ocean.” Journal of Environmental Radioactivity 111 (2012): 109-115.

Rossi, Vincent, et al. “Multi-decadal projections of surface and interior pathways of the Fukushima Cesium-137 radioactive plume.” Deep Sea Research Part I: Oceanographic Research Papers (2013).

Woods Hole Oceanographic Institution FAQ: Radiation from Fukushima

Explained: rad, rem, sieverts, becquerels. A guide to terminology about radiation exposure

 




My 2014 resolution: stop my country from becoming a surveillance state | Dan Gillmor | Comment is free | theguardian.com


Comments:" My 2014 resolution: stop my country from becoming a surveillance state | Dan Gillmor | Comment is free | theguardian.com "

URL:http://www.theguardian.com/commentisfree/2014/jan/05/new-years-resolution-stop-nsa-surveillance-state


Employees inside the joint special operations command at National Security Agency (NSA) headquarters in Fort Meade, Maryland. Photograph: Brooks Kraft/Corbis

Our New Year's resolutions tend to be well-meaning and hard to keep. That's because we resolve to change our lives in fundamental ways – get fit, etc. But inertia and habit are the enemy of change, and we usually fall back into old patterns. It's human nature.

Despite all that, I've made a resolution for 2014. It is to do whatever I can to reverse my country's trajectory toward being a surveillance state, and to push as hard as possible for a truly open internet.

I realize I can't do much on my own, and hope many others, especially journalists, will join in. This year may be pivotal; if we don't make progress, or worse, lose ground, it may be too late.

Thanks to whistleblowers, especially Edward Snowden, and the journalists who've reported on what they've been shown, the citizens of many countries have a far better idea than before about the extent to which security and law enforcement services have invaded their lives. We've learned about the stunning capabilities of the National Security Agency and others to create a real-life Panopticon, spying on and recording everything we say and do. We've learned that they abuse their powers – because that is also human nature – and lie incessantly, even to the people who are supposed to keep them in check. And we've learned that the technology industry is, if not in bed with the surveillance state, its chief arms dealer.

Meanwhile, the telecom industry – and its corporate and political allies – have been working hard to turn the internet into just an enhanced form of cable television. They are trying to end any vestige of what's come to be known as "network neutrality", the idea that we users of the internet, not the corporate middlemen, should make the decisions about what bits of information get delivered to our devices.

These forces of centralized control are pushing laws and policies that amount to an abrogation of free speech and free assembly. They don't just chill our ability to communicate. They also threaten innovation and our economy.

Yet there has also been some progress. Earlier whistleblowers who didn't have Snowden's documentation have been vindicated. Members of Congress who've been warning, obliquely, about what was happening have been proven right, and have made dramatic inroads with colleagues who want to rein in some of the spying. Several technology companies are claiming to be outraged by what's been going on, and say they're taking steps to bolster their customers' and users' security. Several federal judges have chosen to uphold their oath of office by ruling that some of the NSA's activities violate the constitution. Public opinion is evolving.

The open internet isn't dead, either. The new head of the Federal Communications Commission has said he wants to protect net neutrality (though he's also made troubling suggestions about fast lanes for certain content). New technology initiatives are emerging that could help protect us as well, such as the Open Technology Institute's just-launched Commotion project to create community-based "mesh networks" that carriers can't control.

Progress, but not nearly enough. The heads of the privacy-destroying agencies have made it absolutely clear that they don't have the slightest intention of moderating their activities; they plan, if anything, to accelerate their invasions of our lives. Meanwhile, most of the politicians who swear to "protect and defend the constitution" are, in fact, the surveillance abusers' chief defenders.

So what can Americans do, as individuals and together? A great deal, I believe.

We can call and write our members of Congress. As a resident of California, I recognize the futility of doing this with our senior Senator, Dianne Feinstein, who has made it her mission to protect and defend the NSA (FBI, CIA, DEA, et al.), and not the Bill of Rights. But I'm still letting her and other elected officials know my views on surveillance, the open internet and related matters.

I've never been a single-issue voter. Yet, I'm increasingly leaning toward making these the overriding issues on how I vote. I can now imagine supporting a candidate with whom I disagree on almost everything else but who vows to make them his or her top issues as well. (I also realize the risk of this approach – we've been lulled by politicians' false promises in the past.)

We can support organizations whose mission it is to protect our rights and work for open communications. Among the many in this category, two stand out for their essential and often effective work: the American Civil Liberties Union and the Electronic Frontier Foundation. On Thursday, the ACLU said it will appeal one judge's awful ruling to uphold the NSA's indiscriminate collection of phone data.

The EFF and ACLU are, as noted, just two of the many organizations and lobbies that are working on our behalf. Do some homework, and make your own decisions on who deserves your support. They are leverage, and they need our help.

We can do more to make liberty and security part of our own use of technology. I use encryption wherever possible, for example, and keep my software up to date. I also use Linux and other non-proprietary software wherever possible.

I also want to make a special appeal here to journalists. A few of you have done fantastic work in recent months in exposing the growing surveillance state. And a few of you have been paying attention in recent years to the growing control-freakery that threatens the open internet.

We need more of you to jump into these arenas, pronto. You have a selfish reason to do this; surveillance and a controlled internet will destroy your ability to do your jobs properly. But you have a higher calling as well. Journalism at its best is about holding powerful people and institutions accountable. When you do your jobs, you serve the people who need to know what is being done with their money and in their names, and who need information to make sound decisions in all aspects of their lives.

As an incurable optimist, I'm hoping this will be the year we look back on as a vital phase of America's recovery from its post September 11 insanity. It won't happen by itself, however. This cause needs our attention, and our work.


Scaling Mercurial at Facebook | Engineering Blog | Facebook Code | Facebook


Comments:"Scaling Mercurial at Facebook"

URL:https://code.facebook.com/posts/218678814984400/scaling-mercurial-at-facebook/


With thousands of commits a week across hundreds of thousands of files, Facebook's main source repository is enormous--many times larger than even the Linux kernel, which checked in at 17 million lines of code and 44,000 files in 2013. Given our size and complexity--and Facebook's practice of shipping code twice a day--improving our source control is one way we help our engineers move fast.

Choosing a source control system

Two years ago, as we saw our repository continue to grow at a staggering rate, we sat down and extrapolated our growth forward a few years. Based on those projections, it appeared likely that our then-current technology, a Subversion server with a Git mirror, would become a productivity bottleneck very soon. We looked at the available options and found none that were both fast and easy to use at scale.

Our code base has grown organically and its internal dependencies are very complex. We could have spent a lot of time making it more modular in a way that would be friendly to a source control tool, but there are a number of benefits to using a single repository. Even at our current scale, we often make large changes throughout our code base, and having a single repository is useful for continuous modernization. Splitting it up would make large, atomic refactorings more difficult. On top of that, the idea that the scaling constraints of our source control system should dictate our code structure just doesn't sit well with us.

We realized that we'd have to solve this ourselves. But instead of building a new system from scratch, we decided to take an existing one and make it scale. Our engineers were comfortable with Git and we preferred to stay with a familiar tool, so we took a long, hard look at improving it to work at scale. After much deliberation, we concluded that Git's internals would be difficult to work with for an ambitious scaling project.

Instead, we chose to improve Mercurial. Mercurial is a distributed source control system similar to Git, with many equivalent features. Importantly, it's written mostly in clean, modular Python (with some native code for hot paths), making it deeply extensible. Just as importantly, the Mercurial developer community is actively helping us address our scaling problems by reviewing our patches and keeping our scale in mind when designing new features.

When we first started working on Mercurial, we found that it was slower than Git in several notable areas. To narrow this performance gap, we've contributed over 500 patches to Mercurial over the last year and a half. These range from new graph algorithms to rewrites of tight loops in native code. These helped, but we also wanted to make more fundamental changes to address the problem of scale.

Speeding up file status operations

For a repository as large as ours, a major bottleneck is simply finding out what files have changed. Git examines every file and naturally becomes slower and slower as the number of files increases, while Perforce "cheats" by forcing users to tell it which files they are going to edit. The Git approach doesn't scale, and the Perforce approach isn't friendly.

We solved this by monitoring the file system for changes. This has been tried before, even for Mercurial, but making it work reliably is surprisingly challenging. We decided to query our build system's file monitor, Watchman, to see which files have changed. Mercurial's design made integrating with Watchman straightforward, but we expected Watchman to have bugs, so we developed a strategy to address them safely.

Through heavy stress testing and internal dogfooding, we identified and fixed many of the issues and race conditions that are common in file system monitoring. In particular, we ran a beta test on all our engineers' machines, comparing Watchman's answers for real user queries with the actual file system results and logging any differences. After a couple months of monitoring and fixing discrepancies in usage, we got the rate low enough that we were comfortable enabling Watchman by default for our engineers.
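
The cross-checking strategy described above can be sketched roughly as follows (a simplified illustration of the idea, not Facebook's actual code). Here changed_since_from_watchman stands in for whatever query the extension really issues to Watchman, and log is any standard logger.

import os

def walk_changed_since(root, timestamp):
    """Ground truth: walk the whole tree and return paths modified after `timestamp`."""
    changed = set()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) > timestamp:
                    changed.add(os.path.relpath(path, root))
            except OSError:
                pass  # the file vanished between listing and stat
    return changed

def cross_check(root, timestamp, changed_since_from_watchman, log):
    """Compare the monitor's answer against the slow-but-sure walk and log any drift."""
    fast = set(changed_since_from_watchman(root, timestamp))  # hypothetical Watchman query
    slow = walk_changed_since(root, timestamp)
    missed = slow - fast     # changes the monitor failed to report
    spurious = fast - slow   # changes the monitor reported that did not happen
    if missed or spurious:
        log.warning("watchman drift: missed=%r spurious=%r", missed, spurious)
    return slow              # trust the filesystem walk while still beta testing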

For our repository, enabling Watchman integration has made Mercurial's status command more than 5x faster than Git's status command. Other commands that look for changed files--like diff, update, and commit--also became faster.

Working with large histories

The rate of commits and the sheer size of our history also pose challenges. We have thousands of commits being made every day, and as the repository gets larger, it becomes increasingly painful to clone and pull all of it. Centralized source control systems like Subversion avoid this by only checking out a single commit, leaving all of the history on the server. This saves space on the client but leaves you unable to work if the server goes down. More recent distributed source control systems, like Git and Mercurial, copy all of the history to the client which takes more time and space, but allows you to browse and commit entirely locally. We wanted a happy medium between the speed and space of a centralized system and the robustness and flexibility of a distributed one.

Improving clone and pull

Normally when you run a pull, Mercurial figures out what has changed on the server since the last pull and downloads any new commit metadata and file contents. With tens of thousands of files changing every day, downloading all of this history to the client every day is slow. To solve this problem we created the remotefilelog extension for Mercurial. This extension changes the clone and pull commands to download only the commit metadata, while omitting all file changes that account for the bulk of the download. When a user performs an operation that needs the contents of files (such as checkout), we download the file contents on demand using Facebook's existing memcache infrastructure. This allows clone and pull to be fast no matter how much history has changed, while only adding a slight overhead to checkout.

But what if the central Mercurial server goes down? A big benefit of distributed source control is the ability to work without interacting with the server. The remotefilelog extension intelligently caches the file revisions needed for your local commits so you can checkout, rebase, and commit to any of your existing bookmarks without needing to access the server. Since we still download all of the commit metadata, operations that don't require file contents (such as log) are completely local as well. Lastly, we use Facebook's memcache infrastructure as a caching layer in front of the central Mercurial server, so that even if the central repository goes down, memcache will continue to serve many of the file content requests.

This type of setup is of course not for everyone—it's optimized for work environments that have a reliable Mercurial server and that are always connected to a fast, low-latency network. For work environments that don't have a fast, reliable Internet connection, this extension could result in Mercurial commands being slow and failing unexpectedly when the server is congested or unreachable.

Clone and pull performance gains

Enabling the remotefilelog extension for employees at Facebook has made Mercurial clones and pulls 10x faster, bringing them down from minutes to seconds. In addition, because of the way remotefilelog stores its local data on disk, large rebases are 2x faster. When compared with our previous Git infrastructure, the numbers remain impressive. Achieving these types of performance gains through extensions is one of the big reasons we chose Mercurial.

Finally, the remotefilelog extension allowed us to shift most of the request traffic to memcache, which reduced the Mercurial server's network load by more than 10x. This will make it easier for our Mercurial infrastructure to keep scaling to meet growing demand.

How it works

Mercurial has several nice abstractions that made this extension possible. The most notable is the filelog class. The filelog is a data structure for representing every revision of a particular file. Each version of a file is identified by a unique hash. Given a hash, the filelog can reconstruct the requested version of a file. The remotefilelog extension replaces the filelog with an alternative implementation that has the same interface. It accepts a hash, but instead of reconstructing the version of the file from local data, it fetches that version from either a local cache or the remote server. When we need to request a large number of files from the server, we do it in large batches to avoid the overhead of many requests.
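
Based on that description, the shape of the extension looks roughly like the sketch below (a simplified illustration, not the real remotefilelog code). The local_cache, memcache, and server objects are stand-ins for whatever the extension actually talks to; only the lookup order and the batching idea come from the text above.

class RemoteFileLog:
    """A filelog-style object that fetches file revisions on demand."""

    def __init__(self, path, local_cache, memcache, server):
        self.path = path                 # path of the file within the repository
        self.local_cache = local_cache   # dict-like: revision hash -> file contents
        self.memcache = memcache         # shared cache sitting in front of the server
        self.server = server             # the central Mercurial server

    def read(self, node):
        """Return the contents of this file at revision `node` (a hash)."""
        data = self.local_cache.get(node)
        if data is None:
            data = self.memcache.get((self.path, node))
        if data is None:
            # Miss everywhere: go to the server. The real extension batches many
            # (path, node) pairs into a single request to avoid per-file round trips.
            data = self.server.fetch_batch([(self.path, node)])[(self.path, node)]
            self.memcache.set((self.path, node), data)
        self.local_cache[node] = data    # keep it locally for offline checkout/rebase
        return data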

Open Source

Together, the hgwatchman and remotefilelog extensions have improved source control performance for our developers, allowing them to spend more time getting stuff done instead of waiting for their tools. If you have a large deployment of a distributed revision control system, we encourage you to take a look at them. They've made a difference for our developers, and we hope they will prove valuable to yours, too.

Faster, More Awesome GitHub Pages · GitHub


Comments:"Faster, More Awesome GitHub Pages · GitHub"

URL:https://github.com/blog/1715-faster-more-awesome-github-pages


We just rolled out some big improvements to GitHub Pages. Now, when someone visits a Pages site, rather than GitHub serving the content directly, the page is served by a global Content Delivery Network, ensuring that the nearest physical server can serve up a cached page at blazingly fast speeds. As an added bonus, we can now protect your GitHub Pages site with the same kind of Denial of Service mitigation services used for GitHub.com.

If you are using a subdomain, custom subdomain, or an A record with GitHub Pages, you may need to make some changes.

Default User Domain - username.github.io

Default subdomains are automatically updated by our DNS, so we've got you covered.

Custom Subdomain (www.example.com) - with CNAME

If you are using a custom subdomain (like www.example.com), you should use a CNAME record pointed at username.github.io as described in our help document.

Apex domain (example.com) - with ALIAS or A

If you currently use an A record, you can tell whether you need to move by checking whether your A record points to 207.97.227.245 or 204.232.175.78. You can check using:

$ dig example.com
example.com. 7200 IN A 207.97.227.245

OR

$ dig example.com
example.com. 7200 IN A 204.232.175.78
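
If you would rather script that check, a short Python snippet (mine, not GitHub's) can resolve the domain and flag the two deprecated addresses listed above:

import socket

OLD_PAGES_IPS = {"207.97.227.245", "204.232.175.78"}  # the legacy A-record targets

def needs_migration(domain):
    """Return True if the apex domain still resolves to a deprecated Pages IP."""
    _host, _aliases, addresses = socket.gethostbyname_ex(domain)
    return any(ip in OLD_PAGES_IPS for ip in addresses)

print(needs_migration("example.com"))  # True means you should update your DNS records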

Some DNS providers (like DNSimple) allow you to use an ALIAS record to point your custom apex domain to username.github.io. If your DNS provider supports this configuration, it will allow us to provide the full benefits of our Content Delivery Network and Denial of Service protection to your Pages site.

If you switch to a subdomain or switch to a DNS provider that supports ALIAS records, you can take advantage of the Content Delivery Network and Denial of Service mitigation.

If you are using an apex domain (e.g. example.com) instead of a subdomain (e.g. www.example.com) and your DNS provider does not support ALIAS records, then your only option is to use A records for your DNS. This configuration will not give your Pages site the benefit of our Content Delivery Network, but you will still be protected by our Denial of Service mitigation. Configure your A or ALIAS records as described in our help document.

Happy (and faster) publishing!

Need help or found a bug? Contact us.

