Channel: Hacker News 50

How the Harper Government Committed a Knowledge Massacre | Capt. Trevor Greene


URL:http://www.huffingtonpost.ca/capt-trevor-greene/science-cuts-canada_b_4534729.html


Scientists are calling it "libricide." Seven of the nine world-famous Department of Fisheries and Oceans [DFO] libraries were closed by autumn 2013, ostensibly to digitize the materials and reduce costs. But sources told the independent Tyee in December that only a fraction of the 600,000-volume collection had been digitized. And a secret federal document notes that a paltry $443,000 a year will be saved. The massacre was done quickly, with no record-keeping and no attempt to preserve the material in universities. Scientists said precious collections were consigned to dumpsters, were burned or went to landfills.

Probably the most famous facility to get the axe is the library of the venerable St. Andrews Biological Station in St. Andrews, New Brunswick, which environmental scientist Rachel Carson used extensively to research her seminal book on toxins, Silent Spring. The government just spent millions modernizing the facility.

Also closed were the Freshwater Institute library in Winnipeg and the Northwest Atlantic Fisheries Centre in St. John's, Newfoundland, both world-class collections. Hundreds of years of carefully compiled research into aquatic systems, fish stocks and fisheries from the 1800s and early 1900s went into the bin or up in smoke.

Irreplaceable documents, like the 50 volumes produced by the H.M.S. Challenger expedition of the late 1800s, which discovered thousands of new sea creatures, are now moldering in landfills.

Renowned Dalhousie University biologist Jeff Hutchings calls the closures "an assault on civil society."

"It is always unnerving from a research and scientist perspective to watch a government undermine basic research. Losing libraries is not a neutral act," Hutchings says. He blames political convictions for the knowledge massacre.

"It must be about ideology. Nothing else fits," said Hutchings. "What that ideology is, is not clear. Does it reflect that part of the Harper government that doesn't think government should be involved in the very things that affect our lives? Or is it that the role of government is not to collect books or fund science?" Hutchings said the closures fit into a larger pattern of "fear and insecurity" within the Harper government, "about how to deal with science and knowledge."

Many scientists have compared the war on environmental science to the rise of fascism in 1930s Europe. Hutchings muses, "you look at the rise of certain political parties in the 1930s and have to ask how could that happen and how did they adopt such extreme ideologies so quickly, and how could that happen in a democracy today?"

ALSO ON HUFFPOST:

  • Cuts To Science In Canada

    A selection of programs and research facilities being closed, downsized or in jeopardy due to federal funding cuts or policy changes made by the Conservative government.

  • Advanced Laser Light Source Project (Varennes, Quebec)

    May be forced to close in 2014 if new funding isn't secured due to moratorium on the Major Resources Support Program (MRS) at Natural Sciences and Engineering Research Council of Canada (NSERC). Several of the following MRS cuts are detailed in a <a href="http://kennedystewart.ndp.ca/sites/default/files/kennedystewart.ndp.ca/field_attached_files/mrs_program_moratorium_impact_report_0.pdf" target="_blank">report by the office of NDP MP Kennedy Stewart</a>, opposition critic for science and technology.

  • Bamfield Marine Sciences Centre (Bamfield, B.C.)

Losing a third of its research budget, worth about $500,000 a year. The money runs out April 1, 2014, due to the MRS moratorium at NSERC.

  • Canadian Coast Guard Ship Amundsen Research Cuts

Canada’s only icebreaker dedicated to research has received $2.8 million in total MRS funding. The moratorium on MRS will result in far less research and higher charter costs, plus the loss of four of its six technicians.

  • Experimental Lakes Area (Kenora District, Ontario)

    The government announced the closure of the Experimental Lakes Area run by Fisheries and Oceans Canada in northwestern Ontario. The cuts will save it about $2 million a year — although <a href="http://www.huffingtonpost.ca/2013/03/19/experimental-lakes-area-tories-scientists_n_2910022.html" target="_blank">sources told The Canadian Press</a> the actual operating cost of the facility is about $600,000 annually, of which a third comes back in user fees. (The Ontario government, working with Ottawa, Manitoba and others,<a href="http://www.huffingtonpost.ca/2013/04/24/ontario-ela-open-for-year_n_3146662.html" target="_blank"> announced April 24 that it would help keep ELA open</a>). The facility, an outdoor laboratory consisting of 58 lakes, laboratories and living quarters, has been in operation since 1968 and is credited with helping solve North America’s acid rain problem in the 1970s and 1980s among other breakthroughs in areas of toxic contaminants, algae and flooding by reservoirs.

  • Canadian Neutron Beam Centre, Chalk River, Ont.

$1.27-million shortfall due to the MRS moratorium. Training for users and students will be scaled back significantly.

  • IsoTrace AMS facility (University of Ottawa, Ontario)

    High precision measurement of radiocarbon and other trace radionuclides for geological dating and tracing in the earth and environmental sciences. Operation in jeopardy. The facility recently received $16 million in funding from the Ontario government and Canadian Foundation for Innovation to set up new geoscience labs at the University of Ottawa. It was counting on $125,000 per year from MRS to maintain operations. That funding was to increase with new facilities. "It is shameful that our main funding organization for the sciences has decided that it should withdraw from supporting solid empirical research through funding laboratories," a spokesperson said.

  • Kluane Lake Research Centre, Yukon

    The Kluane Lake facility, one of Canada's oldest research facilities, lost $106,000 due to MRS cuts. The facility is run by the Arctic Institute of North America, a joint U.S.-Canada research operation that is administered by the University of Calgary along with the University of Alaska, <a href="http://www.cbc.ca/news/canada/story/2012/07/10/f-kluane-glacier-research.html" target="_blank">CBC reports</a>.

  • Canadian Foundation for Climate and Atmospheric Sciences, Ottawa

    Launched by the Liberal government under Jean Chrétien in 2000, the foundation awarded more than $100 million in grants for university-led research. In 2011, the federal government’s first omnibus budget bill killed the foundation. At the time, the government said it would replace some of the funds with $35 million to be distributed through the Natural Sciences and Engineering Research Council (NSERC) over five years for all climate research activities.

  • Polar Environment Atmospheric Research Laboratory (Nunavut)

    Located on Ellesmere Island near Eureka, Nunavut, it is one of the most remote weather stations in the world and does key research on climate change, ozone and air quality. Closed after it lost $1.5 million in annual funding due to the closure of the Canadian Foundation for Climate and Atmospheric Sciences.

  • The Canadian Centre for Isotopic Microanalysis (University of Alberta, Edmonton)

    MRS moratorium means the centre no longer has an open door policy for Canadian researchers or a special reduced NSERC rate for research conducted by Canadians in the labs. "The long-term prognosis for the geochronology labs is not good," a spokesperson said.

  • The National High Field Nuclear Magnetic Resonance Centre (Edmonton, Alberta)

    Program in jeopardy due to MRS moratorium, <a href="http://kennedystewart.ndp.ca/sites/default/files/kennedystewart.ndp.ca/field_attached_files/mrs_program_moratorium_impact_report_0.pdf" target="_blank">according to the NDP</a>.

  • The National Ultrahigh-Field NMR Facility for Solids (University of Ottawa)

The facility will close without MRS funding, leaving $10 million in capital equipment idle, including the only Canadian-based 900 MHz Bruker Nuclear magnetic resonance (NMR) spectrometer,<a href="http://kennedystewart.ndp.ca/sites/default/files/kennedystewart.ndp.ca/field_attached_files/mrs_program_moratorium_impact_report_0.pdf" target="_blank"> according to the NDP</a>.

  • Office of the National Science Adviser

    The office, created in 2004 by the Liberal government of Paul Martin and led by Arthur Carty, pictured, was intended to provide independent expert advice to the prime minister on matters of national policy related to science, ranging from nanotechnology, high energy particle physics and ocean technologies to climate change and the environment. The Harper government closed the office in 2008.

  • National Round Table on the Environment and the Economy

Funding for the arm's length, independent advisory group was cut in the 2011 budget and the group wound down in 2012. Since 1988, it had been producing research on how business and government policies can work together for sustainable development — including the idea of introducing carbon taxes. The <a href="http://www.huffingtonpost.ca/2012/05/14/national-round-table-on-the-environment-and-the-economy-funding_n_1516240.html" target="_blank">Tories confirmed they cut funding because of the group's focus on carbon taxes</a>.

 


Happy to help :) hire me ;)


URL:http://mibake.co


Hi there!

My name is Michael Baker. I'm a freelance PHP developer and Front-End Engineer. My specialty is converting PSDs into responsive HTML5, CSS and Javascript. I mostly work with creative agencies (because they're awesome).

This is the kind of work I do:

Currently I am looking for immediate work to finance a plane ticket to see my girlfriend, my love. We've been in a long distance relationship and I can no longer bear to be apart from her. If you've ever been in a long distance relationship, you'll know that it's not sustainable. So please, hire me. I'll give you excellent value and your paid invoice will give me one of the greatest gifts in the world.

My standard rate is $65/hr, but right now, rates are completely negotiable! This is my goal:

Please email me or ping me on Skype!
michael@rcodeteam.com
Skype ID: cleverbaker
@mibake

Samples of my work are available on request. Just ask!

Bitcoin 1NyuYvfbP513FowyPFCNL77rsLmqap3mq3

This minimalist site was bought, written, and launched in under an hour!

ahmetalpbalkan/go-linq · GitHub


URL:https://github.com/ahmetalpbalkan/go-linq


A powerful language integrated query library for Go. Inspired by Microsoft's LINQ.

  • No dependencies: written in vanilla Go!
  • Tested: 100.0% code coverage on all stable releases.
  • Backwards compatibility: your integration with the library will not break except across major releases.

Installation

$ go get github.com/ahmetalpbalkan/go-linq

Quick Start

Example query: Find names of students over 18:

import . "github.com/ahmetalpbalkan/go-linq"

type Student struct {
    id, age int
    name    string
}

over18Names, err := From(students).
    Where(func(s T) (bool, error) {
        return s.(*Student).age >= 18, nil
    }).
    Select(func(s T) (T, error) {
        return s.(*Student).name, nil
    }).
    Results()

Documentation

The wiki covers:

  • Install & Import
  • Quickstart (Crash Course)
  • Parallel LINQ
  • Table of Query Functions
  • Remarks & Notes
  • FAQ

License

This software is distributed under Apache 2.0 License (see LICENSE for more).

Disclaimer

As noted in LICENSE, this library is distributed on an "as is" basis and the author's employment association with Microsoft does not imply any sort of warranty or official representation of the company. This is purely a personal side project developed in spare time.

Authors

Ahmet Alp Balkan@ahmetalpbalkan

Release Notes

v0.9-rc3
* removed FirstOrNil, LastOrNil, ElementAtOrNil methods 
v0.9-rc2.5
* slice-accepting methods accept slices of any type with reflections
v0.9-rc2
* parallel linq (plinq) implemented
* Queryable separated into Query & ParallelQuery
* fixed early termination for All
v0.9-rc1
* many linq methods are implemented
* methods have error handling support
* type assertion limitations are unresolved
* travis-ci.org build integrated
* open sourced on github, master & dev branches

Evernote, the bug-ridden elephant | jasonkincaid.net


URL:http://jasonkincaid.net/2014/01/evernote-the-bug-ridden-elephant/


To say this post pains me would be an understatement. More than any other technology, Evernote is part of me, having evolved from habit to instinct over several years and nearly seven thousand notes. Every day ideas flit through my head, ideas for essays, for characters, for jokes. Just now I catch a glimpse of one, without thinking I am talking into my phone like a Star Trek Communicator, telling myself that maybe I should title this post Leaky Sync. Maybe not.

Because I use it so often, I am unusually familiar with the service’s warts. Evernote’s applications are glitchy to the extreme; they often feel as if they’re held together by the engineering equivalent of duct tape. Browser extensions crash, text cursors leap haphazardly across the screen — my copy of Evernote’s image editor Skitch silently failed to sync for months because I hadn’t updated to the new version. Most issues are benign enough, but the apps are so laden with quirks that I’ve long held a deep-seated fear that perhaps some of my data has not been saved, that through a syncing error, an accidental overwrite — some of these ideas have been forgotten.

As of last month, I am all but sure of it.

I’ve been learning how to write songs. It’s terrifying because I stink, so I trick myself, diddling around without actually intending to record anything. With any luck I reach a fugue state, vaguely listening for my fingers to do something interesting; sometimes instinct steers me toward the green elephant’s ‘record’ button and I play for a while.

And so I find myself on December 5, when a meandering session results in an 18 minute Evernote audio recording on my iPhone labeled “not bad halfway through” — high praise, for me. Some of the chord changes are sheer luck, no idea what I did but they sounded good the first time.

I decide to give it another listen with more discerning ears, self-loathing eagerly waiting in the wings.

And — nothing. Zero seconds out of zero seconds. It’s a blank file.

Alarmed, I tap record again, make another note. It won’t play, either.

Another. This one works.

One more. Zero out of zero.

I check the Wifi signal (fine). I let the phone sit for a while to sync, just in case. I head to the web app, which — thankfully — shows the note intact, with its attachment as an 8.7 megabyte .m4a file.

I try to open it in iTunes — it shrugs. Quicktime spits an error. Time to bust out the big guns. VLC.

Nada.

Teeth grinding, I contact Evernote support. The process is slow and bumbling, but I’d like to think this has more to do with Evernote’s overly-structured ticket system than the people working there. Unfortunately, in the process of trying to learn what happened to my audio file, I discover another flaw in Evernote’s system.

As an apparently standard part of Evernote’s support process, it requests that users send over an Activity Log. This is a file generated by each Evernote application that records the myriad housekeeping events going on behind the scenes — ”Sending preference changes…”, and so on.

For most services this log wouldn’t make me bat an eye, but in many ways my Evernote archive is more sensitive than my Gmail account. With email, there’s always the possibility that the guy on the other end will forward the message along, so I tend to behave accordingly. With Evernote it’s just me. I try not to filter myself because that’s how creativity dies.

I ask the support person to verify that he will not have access to my data. He assures me he won't — just the metadata, like note titles (why Evernote doesn't believe note titles are potentially sensitive is beyond me, but, in my case, they're usually blank anyway).

Still, out of habitual paranoia, I skim through the log before sending. Thousands of lines of gibberish, dates and upload counts and [ENSyncEngine] INFO: Sending search changes.

And then I come across something more legible. It’s a text note I left a few evenings ago, a stray thought about sex, if I’m being honest. Further down, another note, the entire contents of the text, broken up by some HTML tags. And another.

Turns out there’s a bug, this time compliments of Evernote for Mac’s ‘helper’ — an official mini app that’s meant for jotting down notes without having to switch to the hulking beast that is the desktop application. On my Macbook Pro, running the latest version of Evernote for Mac, this ‘helper’ app records the entirety of any text it saves into the log file.

Alarmed and not a little bit furious that I nearly sent him some deeply embarrassing musings, I tell the support person about the issue, noting that it is a serious breach of privacy (and an obvious one, given that I noticed it in all of ten seconds).

They say to file another ticket.

As for the audio file: even more bad news.

It’s been nearly a month and the most substantive thing Evernote has said is that it is “seeing multiple users who have created audio notes of all sizes where they will not play on any platform.” The company has given me no information on what’s wrong with the corrupted file, and no indication that they might find a way to get it working in the future.

Adding further insult, the up-to-date iOS application continues to create corrupted audio notes, despite receiving an update on December 17, twelve days after I reported the issue. The support team actually couldn’t tell me whether that update addressed the audio problem — they said I should check the App Store release notes, which routinely include the ambiguous line “bug fixes”, so I had to figure it out for myself. Two more corrupted notes later, I can say with some authority that it’s still there (I’ve also encountered a new issue, where some audio files simply vanish).

Through it all, the support team has displayed a marked lack of urgency that has bordered on nonchalance. Maybe they’re trained that way, or maybe data loss on Evernote isn’t as rare as I’d hope.

None of this has been life shattering, but given how reliant I am on Evernote it is deeply unnerving — now each note I instinctively leave myself is tinged with anxiety. I’m concerned that as I dig through my Evernote archive I’ll encounter more corrupted audio notes, and, worse, my paranoia is increasingly convinced that there may have been notes that never were saved to the archive at all.

More than that, I am alarmed that Evernote seems to be playing fast and loose with the data entrusted to it. Instead of building a product that is secure, reliable, and fast, it has spread itself too thin, trying to build out its install base across as many platforms as possible in an attempt to fend off its inevitable competition.

This strategy is tolerable for a social network or messaging app (Facebook got away with atrociously buggy apps for years). But Evernote is literally aiming to be an extension of your brain, the place to store your most important ideas. Its slogan is “Remember Everything”. Presumably the integrity of its data should be of the utmost importance.

What’s worse, it isn’t consistently improving. When iOS7 launched, Evernote was one of the first applications to overhaul with a new, ‘flat’ design, and as a result benefitted from being featured prominently within the App Store. But functionally, it was clearly a downgrade from the old app, with extra dollops of sluggishness, crashes, and glitches — it may well have introduced the audio recording bug I fell prey to (I believe it dates back to at least October, when I encountered a similar audio issue that I chalked up to user error).

Evernote’s security track record has been similarly frustrating. Asked in October 2012 why the service had not implemented the increasingly-common two-factor authentication option already offered by companies like Google, Evernote’s CEO, Phil Libin, wrote, “Finding an approach that gives you increased security without making Evernote harder to use is not just a matter of adding two-factor authentication…”, implying that something better was on the way.

Five months later the promised security upgrade was still MIA — until Evernote was hacked, its database of user passwords was compromised, and the service rushed to implement a two-factor system that didn’t look much different from the sort Libin was apparently aiming to leapfrog.

This is a company with over $250 million in funding and 80 million users. And unlike many web services that promise exhaustive security and reliability, it’s one I actually pay for.

Ironically, the same day I was told Evernote didn’t have a fix for my corrupted music recording, the New York Times published an article about Evernote titled, An App That Will Never Forget a File.

Update: Evernote CEO Phil Libin contacted me and we spoke about the issues described. He apologized, saying the post rings true and that there is a lot of work to be done both on the application and service fronts. In the short-term the company will be implementing fixes for the issues above, with plans to focus on general quality improvements in the months ahead.

JLOUIS Ramblings: Haskell vs. Erlang for bittorent clients

[Feature #9362] Minimize cache misshit to gain optimal speed by shyouhei · Pull Request #495 · ruby/ruby · GitHub

I Got Fired Last Week. That’s a Good Thing. Here’s Why. | Alex Gives Up


URL:http://alexgivesup.com/2013/12/23/i-got-fired-last-week-thats-a-good-thing-heres-why/


“We’d like to help you transition out of the company.”

As the conversation progressed and reality sunk in, my ears slammed shut and blood streamed  to my head. And since I neglected to bring a jacket for what I thought was a routine Friday morning coffee meeting with my CEO, it was cold and now I was shivering like a dumbass.

I joined the company eight months ago when it was just three guys with laptops, and I’d watched it successfully launch, raise $3.7M in funding, and expand from 3 employees to 13 — three of whom I’d recruited and hired. I was proud of my first 8 months at work. I ran a viral email campaign that signed up a person a minute for the week preceding launch, and then generated a firestorm of media coverage when the product opened for business. Shit, I had just released a new version of our website two days prior that improved on-site conversions by 400%.

What happened?

It’s painfully simple. I excelled at the company’s growth stage because I had a ton of hustle, a lightning-fast ability to learn, and the entrepreneurial wherewithal to juggle 30 skills at once. Now, the company had blossomed, hired a new VP of Marketing with twenty-five years’ experience, and had reached a point where it “needed specialists instead of generalists.”

The irony did not escape me: as the company’s Director of Growth, I grew the company to the point where it had outgrown me. Twelve days before Christmas.

The walk back to my apartment was long. I called my parents in tears and relived every misstep, looking entirely out of place in the midst of the morning hustle and bustle. When I got back to my apartment at the ripe hour of 9:30am, I strongly considered draining a bottle of whiskey on my balcony and blacking out before noon. But after a hefty lunch, where I specifically asked for a dinner plate “with extra gravy,” that plan changed. Dramatically.

Here I am one week later. I’ve had a dozen interviews, a job offer, and am now actively turning down work instead of looking for it.

It’s possible that getting shit-canned was the best thing to happen to me in 2013. Here’s why:

1. Getting fired lights a fire

This is not the first time I’ve been fired. It’s the second. After the first, I fulfilled a lifelong dream to start my own company (it also inadvertently inspired another epic undertaking). This was not a coincidence. Every late night was fueled by a frenetic energy to prove those doubters wrong. I wanted them to view letting me go as the biggest mistake in their company’s history. It wasn’t of course—not even close—but after every personal victory, I still felt like Reggie Miller raining 3’s in Spike Lee’s weasely face.

I would go so far as encouraging everyone to get their ass handed to them along with an Employee Termination Letter at least once in their life. It’s an unforgettable feeling, and getting kicked in the gut by the unforgiving boot of unemployment is a beautiful thing. As long as you have the resilience to counter it with a roundhouse kick to the face.

2. I learned to appreciate my friends

You remember friends, right? Those things you pushed aside in favor of late work nights? You know, something other than your laptop’s blueish hue? I thought I did too—but after getting canned, that view changed.

After that fateful Friday lunch, I immediately started calling friends. Close friends. Old friends. New friends. Friends in high places. Friends in low places. I talked to over 40 people in 4 days. So many were unbelievably willing to help. They readily dispensed advice, made intros, and lent sympathetic ears. It was tremendously humbling.

Those friends knew friends—CEOs, recruiters, employers, and more. That led to job opportunities, which led to interviews, which led to offers. I knew this intuitively, but it’s true: jobs come from people. Not the internet. Not job boards. And if nothing else, when’s the last time you grabbed a consolatory beer with a Craigslist post?

3. I’m a jack of all trades, but a master of none. 

This was a tough one to swallow. But look at the facts: I’m a math major who writes in his spare time for fuck’s sake—I wear more hats than a balding magician. Although I’m very good at a dozen different things, I’m an expert in none of them. And that’s dangerous.

Yes, employees are greater than the sum of their skills, and most organizations—especially small ones—need people who can fill the roles of 2 or 3 people. But get this through your head: if you’re not the best at something, you’re replaceable.

This was brutally true for me. Other than being an affable goofball, there was not one thing I was best at in this last company. Our designer is a better designer. Our engineers are better coders. Our CEO is a better marketer. Our Chief of Staff is a better leader. Yes, I was very good at those things, but was I the best at any one of them? No. Painful, but true.

In other words: I was expendable. That phrase “we need specialists instead of generalists,” already haunts me. It will also be the last time I hear those words. Think I’m going to become a master in my next job? Yep. Better fucking believe it.

4. The grass is greener, goddamnit!

I can’t tell you the number of times I heard some variant of that phrase in the last week. “Something bigger and better is out there waiting for you.” “These things happen for a reason.” “You’ll find something even more exciting.” At first, I wrote it off as conciliatory bullshit. Those pearls of advice are so hackneyed that my stomach acid swirled at every utterance.

But when I took stock of my life and reflected on every failure, there’s a pattern: I’ve rebounded like Dennis Rodman on amphetamines. That failed Chemistry class? Highest GPA next quarter. Fired from my first job? Started my own company. Lost control of that company? Life-changing three month journey through Europe.

So contrary to my cliché aversion, I know this time will be no different. The signs are strong. I can’t see it yet, but I can feel the fluorescent green, Hulk-strength grass ready to shoot through the soil.

Over the last week, I’ve viewed this exit from every angle. Losing your job will facilitate that type of introspection. But one thought has prevailed over all the self-pity, anger, and dejection:

What an amazing Christmas present.

Using Rust for an Undergraduate OS Course


URL:http://rust-class.org/pages/using-rust-for-an-undergraduate-os-course.html


Perhaps the most controversial decision about this course was the choice of Rust, a very new and immature programming language being developed by Mozilla, as the primary language for course assignments. In general, I am very happy with how this worked out, although there were significant drawbacks to using such a new language and student opinions on this were fairly mixed. Next, I'll describe more why I decided to use Rust and what impact it had on the course. (For more on the overall context and design of the class, see the Course Wrapup.)

Unacceptability of C

The default language choice for Operating Systems courses is C. Nearly all (at least 90% from my cursory survey, the remainder using Java; if anyone knows of any others, please post in the comments!) current and recent OS courses at major US universities use C for all programming assignments. C is an obvious choice for teaching an Operating Systems course since it is the main implementation language of the most widely used operating systems today (including Unix and all its derivatives), and is also the language used by nearly every operating systems textbook (with the exception of the Silberschatz, Galvin, and Gagne textbook, which does come in a Java version). To paraphrase one syllabus, "We use C, because C is the language for programming operating systems."

C is an obvious choice, but it's also an obviously bad choice. C was a great language when it was designed in the early 1970s, to make up for not having a machine powerful enough to run languages like Algol 60, which were already available but too complex to compile without a huge (by the standards of the time) amount of memory.

An example of a design decision made to save memory in C is the use of = as the assignment operator (previous languages, such as Algol, used an arrow-like two-character sequence, :=, which makes the directional nature of assignment clear). From kindergarten on, people learn the mathematical meaning of the = symbol as a Boolean operator that is true when the values on both sides match. Using it to mean assignment is confusing and misleading, but was a "good" decision for C since it saved one byte for each assignment in program text. Since assignments are much more common than equality tests, using == for equality and = for assignment saves bytes in source code files, which was a lot more important than providing a coherent model or understandable notation for Dennis Ritchie and Ken Thompson in the 1960s.

Computers have changed a bit since 1972 - our programming tools should too! (We should still study and learn from history, though.)

By the time students make it to cs4414, they have recovered from the atrocities of = meaning assignment, but many others don't make it over this (unnecessary) hurdle and suffer greatly because of it, if they have the misfortune of starting out with a programming language that uses C-like syntax.

More important problems with C for teaching an OS course are its lack of type safety and memory safety, and its lack of any intrinsic support for concurrency. It was the correct decision to not include these things in C for the purpose for which it was designed in the 1960s and 1970s, but there have been a few advances in programming languages, compilers, and architecture since then, and the relative costs of things have changed, as well as the increased importance of security and robustness.

C may be the language of operating systems today (and since the 1970s), but it is also the language of buffer overflow vulnerabilities, dangling pointers, double frees, memory leaks, undefined behavior, and race conditions. To me, it would seem unconscionable to have students use C in a way that is likely to lead to programs riddled with security vulnerabilities and robustness problems. On the other hand, actually teaching students to use C in a responsible way would probably require at least the full semester as well as use of many other tools, including some that are only available commercially.

As far as I could tell, none of the OS courses that use C had any time for teaching students how to write reasonably safe C code (and none of the C-based OS textbooks I looked at made anything beyond superficial attempts to address safe programming practices in C), which is understandable given how much time it takes to learn the other topics in an OS class, but from my perspective, seems reckless and irresponsible. A C programmer who knows how to hack on an OS kernel, but doesn't know how to avoid memory leaks, double frees, and undefined behavior, should not be unleashed on an unsuspecting code base!

Having eliminated C as an irresponsible choice for teaching an OS course, I had to select another language. (I briefly considered not using a particular language, but since it seemed important to provide starting code to have interesting assignments and to cover some things in class using a particular language, this didn't seem like a viable option to me.)

I considered five possibilities: Java, Python, D, Go, and Rust.

Java. Okay, I didn't really consider Java, since I personally find it to be unpleasant to program in Java, but need to mention it since it is the only commonly used alternative to C for OS courses (some examples: University of Massachusetts CMPSCI 377: Operating Systems, University of Texas: CS 372: Operating Systems). The advantage of using Java over C is that it provides type safety and automatic memory management, so it does not suffer from most of the security and robustness problems in typical C code. It does not, however, provide enough low-level access to system resources to adequately cover many important operating systems concepts, and is not a language anyone would use to write an OS kernel. Java also suffers from a lack of useful expressiveness, only finally and awkwardly adding lambda expressions in Java 8 (expected to be released in March 2014, nearly 19 years after the initial Java release!)

Python. Unlike Java, Python is a quite tasty and elegant language, and has the advantage that many complex algorithms can be expressed clearly in a few lines of Python code. When I was on leave at Udacity, I'd had some discussions with Matt Welsh about teaching an open on-line systems programming course using Python, and he came up with a plan for teaching many of the concepts needed to build scalable, robust, distributed systems using Python. It seems possible to do a good job teaching some of the topics in an OS course using Python labs, including distributed systems and scheduling, but, like Java, it is not well-suited to exploring lower-level topics like memory management, nor is it a language one could imagine using to implement an OS kernel.

D. D is a programming language designed as a successor to C++ that has been around since 2001. D has some very nice features: it provides reasonable support for functional programming, static types with type inference, and some concurrency constructs. The biggest disadvantage of D compared to Rust is that it does not have the kind of safety perspective that Rust does, and in particular does not provide safe constructs for concurrency. I prefer Rust's memory management options: although D provides a fairly similar combination of garbage-collected objects and objects with explicit memory management, it does not provide any of the safety guarantees or the lifetime model that Rust does. The other argument against using D is that it has been around for more than 10 years now without much adoption, and appears to be on its way out rather than growing in popularity. This shouldn't be a major factor for an academic course, but I'd much rather be teaching something that is in an early phase of exponential popularity growth than something that appears to be declining.

Go. Go is a systems programming language whose design began in 2007 and which was first publicly released in 2009, by a team at Google including Ken Thompson (C co-designer) and Rob Pike. Unlike D, it seems to have a lot of momentum behind it, no doubt partly due to Google's support. There are some preliminary, although tastelessly named, attempts to write operating systems in Go, but the intertwining of the runtime with the compiler makes this quite difficult in Go (in comparison to Rust, where the runtime can be easily separated). I'm not aware of any undergraduate operating systems courses that have used Go, but MIT's graduate 6.824: Distributed Systems course and CMU's 15-440: Distributed Systems course used Go. Go is a fairly new language, but mature enough to have a stable, industrial-strength compiler with great performance, and good documentation (and mature enough to have an idiom guide). Go is also similar enough to C to be easy for C programmers to learn, and has no major learning curve hurdles.

So, Go definitely has a lot of things going for it, and initially I planned to use Go for cs4414 until learning about zero.rs. After looking into Rust more, it seemed like Rust had several key advantages both as a systems programming language in general, and as a primary language for teaching an operating systems course.

Reasons to Use Rust

Rust is a new systems programming language being developed at Mozilla. It is primarily driven by the needs of Mozilla's Servo project to build an experimental multi-core browser engine. Rust adopts a C-like syntax, but is much less tied to C's semantics, or to being easy for C programmers to learn, than Go or D, and its overriding design goal is to support safe concurrency in a practical way. The first public release was in January 2012, so Rust is a very new and immature language, but it has some innovative and very appealing features for use in systems programming and teaching an operating systems course.

Rust changes the way you think. At least for "ivory-tower" types, the only reason to learn new languages is for their power to change the way you think. Neither Go nor D really does this (at least for anyone who has previously programmed in C, Java, and Python), but Rust does a great deal. Rust requires programmers to think carefully about how objects are shared and used, and this changes the way you think about programs and algorithms. Rust provides simple and safe concurrency mechanisms, and a useful and intellectually interesting task abstraction. I'm not aware of any other language that has concurrency constructs as elegant and easy to use as Rust's spawn, and I don't know of any language that comes close to the race-free safety guarantees provided by Rust. The alternative, with languages like C and Go, is to have students get in the habit of writing programs riddled with race condition bugs. Such programs tend to work well enough to be satisfactory for course projects (where the occasional crash or corrupted data is acceptable), but are increasingly unacceptable in the real world. The next step is to use available tools to detect race conditions (e.g., Go's race detector), but these present another tool to use and learn about, are incomplete (they only detect races that occur in actual executions), and are much less satisfying than the static, language-based mechanisms provided by Rust. Rust's task abstraction is somewhere between a thread and a process: it provides the memory isolation safety of a process while allowing safe memory sharing, but with the lightweight costs of a thread, requiring no context switching. This is a useful thing for programmers, but also a great thing for students to learn about and understand in an operating systems course. Along with the task abstraction, Rust provides good abstractions for communication between tasks, including a simple channel abstraction and abstractions for shared mutable state.
Rust provides strong memory safety guarantees, but still allows programmers explicit control over memory management. The main alternatives provide two extremes: C provides no memory safety but fully explicit control over memory management; Go and D provide memory safety but with all objects being automatically managed with a garbage collector (over which language users have little control). Rust provides a way for programmers to declare objects that are automatically managed or explicitly managed, and statically checks that explicitly managed objects are used safely. This is done using a notion of ownership, and type rules for controlling how ownership is transferred between references (e.g., you can pass an owned object temporarily to a function as a borrowed reference, and can have more complex ownership types to enable controlled sharing). These rules guarantee all memory is safely deallocated, and the sharing restrictions are essential for providing safe concurrency.

Rust provides good mechanisms for higher-order procedures, and many parts of the library encourage students to use them. One of the biggest embarrassments about our department's standard curriculum is that students can reach a 4000-level class without ever encountering a higher-order procedure (as opposed to the sadly rarely-offered alternative intro course, which introduces higher-order procedures in the first assignment), so I thought it was essential to use a language that supports higher-order procedures and provides good opportunities to use them effectively. Rust provides a simple syntax for lambda expressions, and lots of good ways of using them, including in the RWArc abstractions we used in the starting code for PS3.

With its strong emphasis on safety, Rust still provides an escape hatch to allow students to experiment with unsafe code. The Rust compiler normally disallows any code with race conditions or unsafe memory use, but the language provides the unsafe construct for surrounding code that may be unsafe. This can certainly be abused (and I regret not providing some guidelines or rules to prevent students from overusing it, which some students did on some assignments), but it is also very useful for being able to show simple but unsafe code and for understanding what is necessary to make code safe. For example, we used this in PS1, where students added an unsafe counter to a simple web server, and then in PS3 there was a problem where students had to implement it in a safe (race-condition-free) way. The unsafe escape hatch also makes it possible to include assembly code in Rust programs, and to easily use libc.

Rust is open source and has an open development process. The Rust source code is available for everyone to read, and the Rust language designers' discussions are on an open mailing list. (Go is also open source and has open developer discussion, so this is not a particular advantage of Rust over Go.)

Rust has a vibrant, helpful, and friendly community. This is obviously subjective, but the novelty and philosophy of Rust are well-suited to a devoted development team and user base, and I got a good sense that the community would be very helpful from my early interactions on the Rust IRC. For an immature language, having a supportive community is really important, and our class benefited greatly from help from the Rust community throughout the course.

The immaturity of Rust means there are lots of opportunities to make contributions. Rust's immaturity is its biggest drawback (more in the next section), but it also has a good side. Using an immature language means there are lots of opportunities to make substantial contributions, and I tried to encourage students to view anything essential lacking from the Rust ecosystem as an opportunity for them to create it. By the end of the course, one of the students had made changes that were accepted into the main Rust compiler; other groups had built a regular expression library, a true random number generator, and real-time audio, among other projects; and several students are now working on writing tutorials and documentation that will be useful for students in next semester's class and anyone else who wants to learn Rust. It is hard for me to imagine similar contributions being possible in the extremely mature C ecosystem.

Reasons Not to Use Rust

There are lots of great reasons to use Rust in an OS course, but two major (and one minor) ones not to:

Rust is a very immature language (at version 0.7 at the time we started the class). The immaturity means there is little documentation, and much of what existed was not consistent with the latest version. It also means the language continues to change in substantial and non-backwards-compatible ways, which meant code we wrote for the class broke when we moved to version 0.8 mid-semester, and many of the libraries available no longer worked. The lack of documentation is a serious problem, and was the main cause of frustration for students in the class. There is a lot of ongoing effort now (including some from students in the class) to improve the available documentation for Rust, so this problem should diminish rapidly. The more fundamental problem with an immature language is that we lack the experience to have developed idioms and conventions for using the language effectively. Forty years of experience with C has led to the development of lots of idiomatic ways of doing things; in most cases, evolutionary pressures should lead to good solutions, and these solutions are documented in lots of places that are fairly easy to search (e.g., stackoverflow).

Rust has a very steep learning curve and requires programmers to think differently. As mentioned before, for ivory-tower folks the fact that a language makes you think differently is a great property, but this also makes it a lot harder to learn, and is often very frustrating for people used to thinking in certain ways who are unable to do the things they think should be easy. Good tutorials (which hopefully we'll have ready for next semester) and exercises can help, but the way Rust pointers work is different enough from what people with experience using C and Java expect that it seems necessary to get over some pretty big hurdles before writing any non-trivial programs in Rust.
Rust is not widely used in industry (yet), so it is disadvantageous to students compared to gaining experience using a commonly used language like C. I view this as a minor reason, since I don't think it's really the mission of a university CS curriculum to prepare students with particular job skills, but many people bring this up so it is worth discussing. One of the problems with this view is that it leads to a circle of inertia: industry uses C/Java for programming projects because that's what their current team knows and it is easy to hire C/Java programmers, and universities train C/Java programmers because that's what industry wants. Academia should be leading industry, not following it, and one way to do that is by producing graduates who know about things that are not yet widely known in industry. For individuals, however, this larger view is not too helpful (trying to change industry doesn't provide much solace for the poor sap who can't get a job, although our graduates usually have a plethora of interesting job offers, at least for the 4th years, often before finishing this class). More pragmatically, I don't see that a one-semester course using C is enough to really increase someone's value as a C programmer. There are thousands of programmers with decades of experience using C, and it takes many years using C to be a top-level C programmer, so a single semester of coursework is unlikely to make someone a particularly valuable C programmer if she wasn't already a valuable employee because of her general knowledge, ability, and talents. On the other hand, if Rust takes off as an industrial systems programming language, being one of a small number of people who already have significant experience with the language should be a considerable advantage.

Initial student reactions to Rust were largely negative.

The first assignment provided a few exercises to get students started programming in Rust, and then expected them to modify a simple web server (starting from code we provided) to add an unsafe counter and make it respond to file requests. Responding to file requests required extracting the pathname from the requested URL and implementing simple file I/O. Since Rust doesn't yet have much support for strings, extracting the pathname was much more challenging than students expected, and this caused a lot of frustration. All of the students in the class were able to get the counter to work, and 61 out of 65 students were able to get the file request serving to work. Of the four who couldn't, three encountered problems with types they were not able to resolve, and the other one was not able to figure out how to do the string extraction.
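For comparison, here is the kind of pathname extraction the assignment required, sketched in Python rather than the Rust the students wrote (the function name is mine; the request-line format is standard HTTP):

```python
from urllib.parse import urlparse

def path_from_request_line(request_line: str) -> str:
    # An HTTP request line looks like: "GET /files/report.pdf HTTP/1.1"
    method, url, version = request_line.split()
    return urlparse(url).path
```

In a language with a mature string library this is a few lines; in 2013-era Rust, the equivalent splitting and slicing was what ate students' time.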

The submission for PS1 also included an open response question:

(Optional) If you have any additional comments about this assignment or about how the course is going so far, here's your chance:

The responses are here. I encourage you to read all of them, but a few representative comments give the prevailing view:

Rust is interesting. I'm not sure if I like it as a choice for the class language yet or not. The safety of the language makes it valuable for operating systems, but the lack of documentation makes doing assignments difficult. To make matters worse, much of the language has been re-factored since the documentation was written, meaning that many of the methods and procedures that are documented no longer work in the current version.

Rust documentation is god awful. The web server portion of this assignment should've taken me all of three minutes, instead I spent 1.5 hrs trying to use undocumented methods. It would be nice if you could provide us with some common code snippits like reading files for instance. That is not something I should have to spend 20 minutes trying to figure out. I have the feeling that rust is going to be counter productive to actually learning anything about operating systems due to the excessive time spent on fruitless google searches/banging my head on the keyboard.

Using Rust was painful. Some library functions from Rust either didn't work correctly or yielded unexpected results that I didn't want, especially the ones from str module. So I decided to custom-develop the library functions by myself instead of being subordinate to language specific issues. I think we can better use our time learning more about the OS instead of wrestling with language specific semantics or bugs.

For the second problem set, which was focused on learning about processes, students implemented a shell. We provided very minimal starting code that could execute a foreground program, and students had to add internal commands, support for background processes, I/O redirection, pipes, and an extension of their own design to the shell. Students worked in teams of two for this assignment, and in addition to submitting their code and answers to some questions in a web form, did a demo with me or one of the course TAs where they showed off their shell and answered questions about their design and code. Of the 33 teams, 27 were able to successfully implement all the required problems.

Problem Set 3 focused on synchronization, scheduling, and memory management, and required students to implement a much better zhtta web server than the zhttpo server from PS1. They had to make a safe visitor counter using Rust's concurrency abstractions, implement an improved scheduler for a multi-tasking web server, integrate server-side shell includes (using their shell from PS2) into the server, and improve performance by adding caching. The final problems challenged students to improve the server's performance in any way they could (without sacrificing safety) or add some new functionality. Students worked in teams of three, and each team did a demo with me after submitting the assignment. All but two of the teams were able to get everything working (although many did memory caching in a way that actually reduced the performance of their server by requiring too much synchronization), and the remaining teams were able to get most of the extended functionality working. By this point, many students were starting to really enjoy programming in Rust, and the concurrent programming required for the web server really made some of the advantages of Rust clear.
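The "safe visitor counter" problem boils down to serializing access to shared mutable state. A minimal sketch of the idea in Python (the class used Rust's RWArc abstraction; the names here are mine):

```python
import threading

class SafeCounter:
    """A visitor counter whose increments are serialized by a lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self._count = 0

    def increment(self):
        with self._lock:  # without this, += on shared state is a data race
            self._count += 1

    @property
    def count(self):
        with self._lock:
            return self._count

counter = SafeCounter()
threads = [threading.Thread(target=lambda: [counter.increment() for _ in range(10000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
# With the lock, the final count is deterministically 4 * 10000
```

The difference with Rust is that forgetting the lock here still compiles and merely corrupts the count occasionally, whereas Rust rejects the unsynchronized version at compile time.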

For the final assignment, students had an open-ended project, and were free to use any language they wanted (or to do something that was not language-specific). A bit over half of the students ended up using Rust for their project. Many did things that I think are very impressive and will be useful to the larger Rust community and to future offerings of this class including Iron Kernel (an ARM kernel written in Rust with a simple file system which I'm hoping to use for an assignment in next semester's class), Rust Raytracer (a concurrent 3D renderer), tRustees: True Random Number Generation, and Regular Expression Library, as well as projects to develop Rust tutorials. See Project Submissions for links to all the projects and credits.

The final survey and official course evaluation included several questions intended to evaluate students' experience using Rust in the class. Some of the results are below; see the Final Survey Results for full details and more comments.

This could be interpreted quite positively, with nearly 70% of students not against using Rust in future offerings, and many more in strong agreement with this than in strong disagreement. On the other hand, just over 30% of students think Rust should not be used for future offerings of the course.

Despite my enthusiasm and view that teaching C in the standard way would be irresponsible, the majority of students still wanted to have at least some C programming in the course. I hope with improved Rust documentation and tutorials, and a Rust kernel assignment, students will feel less longing for C next semester.

This seems like more than I would expect, but it will be great if nearly half the students end up using Rust after the class. I don't see it as a problem when students don't, though; the things they learned from programming in Rust should make them better programmers in any language.

More than half the students made regular use of the #rust IRC. Having immediate access to helpful experienced Rust programmers and developers was a huge win.

Students overall have mixed views on Rust, with lots of extremes (some people love it, others really hate it). I view that as a fairly positive result for a first run of a course like this, especially in light of the fact that most of the negative comments are primarily about the lack of documentation, which should be much improved next semester. I'll be worried if these results don't improve next semester, though. See the Final Survey Results for more specific comments.

cs4414 Fall 2013 Graduates

In summary, I really enjoyed using Rust and think it is a very promising language with a lot of potential for teaching, as well as for industrial systems programming.

It is irresponsible for universities to continue to teach students to write C code riddled with security vulnerabilities, memory leaks, and race conditions. There are good alternatives available, and Rust appears to be the most promising to me, but I hope others will explore both ways to teach students to write safe C code as well as different languages for teaching systems programming that don't suffer from the legacy flaws of C.

Thanks for reading such a long post! I would be very happy to read comments from any students, people who teach OS courses, or anyone else who is interested in programming languages and operating systems. Feel free to post public comments below, or email me directly with private comments.



Reverse engineering my bank's security token | Thiago Valverde


URL:http://valverde.me/2014/01/03/reverse-engineering-my-bank%27s-security-token


Some names have been changed or removed to protect the innocent. Disclaimer: this project's results do not allow me or anyone else to hack into bank accounts, or even replicate a client's token without access to a rooted device with an active code generator. On the other hand, a malicious third-party application with root privileges would have access to all the information required to generate codes, but would still need the account details (including a password) to fully compromise an account.

Also, some clarifications: this is not a security vulnerability, or even criticism, by any stretch. The bank's app is (arguably) more secure than Google Authenticator (which keeps secrets around in plaintext), and this article should be seen as praise for the bank's app, which does things the right way by (mostly) adhering to the TOTP standard, and protects its data as well as technically possible.
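For reference, standard TOTP (RFC 6238) derives codes from an HMAC over a time-step counter; a minimal sketch in Python (the bank's variant, as shown later, differs in epoch, step length, and other details):

```python
import hmac, hashlib, struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = struct.pack(">Q", unix_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: secret "12345678901234567890",
# time 59s, 8 digits -> "94287082"
```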

My current bank, one of Brazil's largest, provides its clients with one of several methods (in addition to their passwords) to authenticate to their accounts, online and on ATMs.

New accounts are usually provided with a credit-card sized piece of paper with 70 single-use codes which are randomly requested, once per access. This requires the client to obtain a new set of codes whenever they run out of them, which is not very practical.

A better alternative is their Android app (also available in several other platforms). It provides a Google Authenticator-like code generator, except it is PIN-protected, and requires a phone call to activate but operates seamlessly after that. Or so I thought.

I found myself calling the bank every so often after changing ROMs, resetting or changing phones. The activation process is simple enough, but the hacker in me did not enjoy the ordeal. I attempted to use Titanium Backup to no avail, for reasons I would soon understand. There was just one thing left to do - reverse engineer the application and build my own. Maybe make my own physical token using an Arduino.

Before diving into the code, I had to go through the normal setup process once, which meant installing their app, calling the bank and activating it. Here are some screenshots of the process.

The first image shows the installation process. Look at all those permissions it requests! I'm sure they are all necessary for some reason, but none of them should be necessary to generate codes, right? The second image shows the actual activation process, inputting four numeric fields that have to be provided over the phone (notice I was in a call then). The third image shows the actual code generator, after a successful activation.

Reverse engineering Android apps requires a few software tools. Here's what I used for this project:

  • Android SDK

    Provides the adb command-line tool, which can pull APKs, data files and settings from the phone.

  • dex2jar

    Converts Android's Dalvik executables into JARs, which are easier to reverse engineer.

  • JD, JD-GUI

    An excellent Java bytecode decompiler.

  • Eclipse

    A Java IDE to validate discoveries during the reverse engineering process.

The first step in reverse engineering an application is obviously getting the application. It is possible to download APK files directly from Google Play, but I decided to get the file directly from my phone, using ADB.

Enable USB debugging in your cell phone, then run the following commands.

Find the package name

$ ./adb shell pm list packages | grep mybank
package:com.mybank

Find the package path

$ ./adb shell pm path com.mybank
package:/data/app/com.mybank-1.apk

Download the package

$ ./adb pull /data/app/com.mybank-1.apk
2950 KB/s (15613144 bytes in 5.168s)

You should now have a com.mybank-1.apk file in the current directory.

APK files can be extracted using the unzip utility, because they are ZIP files with a different extension (much like JAR files). Inside the archive, the actual code is in the classes.dex file, which I renamed to com.mybank-1.dex just to keep things organized.
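Because an APK is just a ZIP archive, the classes.dex can also be pulled out programmatically; a toy sketch in Python, where an in-memory archive stands in for the real com.mybank-1.apk:

```python
import io
import zipfile

# Build a minimal APK-like ZIP in memory to stand in for com.mybank-1.apk.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("classes.dex", b"dex\n035\x00")   # Dalvik bytecode lives here
    z.writestr("AndroidManifest.xml", b"<manifest/>")

# An APK opens like any other ZIP file.
with zipfile.ZipFile(buf) as apk:
    names = apk.namelist()
    dex = apk.read("classes.dex")
```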

Extract the package

$ unzip com.mybank-1.apk
(file list omitted for brevity)

Rename and convert classes.dex to a JAR file

$ mv classes.dex com.mybank-1.dex
$ ./d2j-dex2jar.sh com.mybank-1.dex
dex2jar com.mybank-1.dex -> com.mybank-1-dex2jar.jar

You should now have a com.mybank-1-dex2jar.jar file in the current directory, which can be opened by JD.

After dragging the JAR file into JD-GUI, you should be greeted by a window similar to the following.

Here's where the fun begins. While there are some obvious packages containing parts of the token module, such as br.com.mybank.integrador.token, br.com.othercompany.token and com.mybank.varejo.token, it doesn't take too long to realize that the core functionality is implemented in a few of the default package classes, which are obfuscated. Bummer.

Here's a snippet from br.com.othercompany.token.GerenciadorConfig:

public void trocaPINcomLogin(int paramInt, boolean paramBoolean, Perfil paramPerfil) {
    if (paramPerfil == null)
        throw new IllegalArgumentException(a.a("1p5/eEf/sl3kbeUcP509qg=="));
    if (!this.jdField_a_of_type_U.jdField_a_of_type_JavaUtilHashtable.contains(paramPerfil))
        throw new RuntimeException(a.a("86jcmKgr/ZshQu9aGVbuGscy2nHW4UEWqudRoUXhImQ=") + a.a("7u8KqqwqUD3a7FM339fp6pRrxUtQrHDMyqvZ6A2MurQ="));
    if ((this.jdField_a_of_type_BrComOtherCompanyTokenParamsGerenciador.isPinObrigatorio()) && (!paramBoolean))
        throw new RuntimeException(a.a("aMsL/5kjkXKD4K1SvpTuuJZUS0U0fL19UT2GxjJ/QzQ="));
    Configuracao localConfiguracao = paramPerfil.getConfiguracao();
    if ((localConfiguracao.a().a()) && (paramPerfil != this.jdField_a_of_type_BrComOtherCompanyTokenPerfil))
        throw new RuntimeException(a.a("ASszutKFJW3iqDb7X/+vqAcYxTLXN2SJOIs0ne596Pu3ZoRxjiiscwhV6fT70efX"));
    localConfiguracao.a().a(paramInt);
    localConfiguracao.a().a(paramBoolean);
    this.jdField_a_of_type_U.a(paramPerfil);
    if (!paramPerfil.equals(this.jdField_a_of_type_BrComOtherCompanyTokenPerfil))
        a(paramPerfil);
}

Every exception thrown by this piece of code is obfuscated, as well as many of the strings used throughout the code. That is a major roadblock, since exception messages and strings in general are a great way of figuring out what the code is doing when reverse engineering something.

Luckily, their developers decided to actually show useful text when a problem occurs and an exception gets thrown, so they wrapped those obfuscated strings with a.a, presumably a decryption routine that returns the original text. That routine is not too straightforward, but it is possible to get a high level understanding of what it is doing. Here are some findings after analyzing the a class and its dependencies:

  • Class p is a base64 decoder.
  • Class b is an AES implementation. Searching for its internal strings and constants on Google revealed that it is part of Paulo Barreto's JAES, a public domain crypto library.
  • private static byte[] a in class a is an obfuscated key, which can be deobfuscated by this short C program, basically replicating a snippet of the original a.a method.

    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, const char *argv[]) {
        char keyin[] = {<data from decompiled method>};
        char keyout[16];
        int i;
        for (i = 0; i < 16; i++)
            keyout[i] = keyin[i] ^ keyin[31 - i];
        /* %02x keeps leading zeros so each key byte prints unambiguously */
        for (i = 0; i < 16; i++)
            printf("%02x", (unsigned char)keyout[i]);
        printf("\n");
        return 0;
    }

    This code yields the AES encryption key.
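The same unmasking step can be sketched in Python for anyone who prefers not to compile C; `keyin` stands for the 32-byte obfuscated array lifted from the decompiled a class (a placeholder here, not a real key):

```python
def deobfuscate_key(keyin):
    """Replicate the unmasking loop above: byte i of the real AES key is
    keyin[i] XOR keyin[31 - i], yielding 16 key bytes from the 32-byte
    obfuscated array."""
    assert len(keyin) == 32
    return bytes(keyin[i] ^ keyin[31 - i] for i in range(16))

# keyin = bytes([...])  # <data from decompiled method>
# print(deobfuscate_key(keyin).hex())
```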

Unfortunately, a.a is not just a wrapper for JAES’s AES class. It also does some crypto of its own. Here’s some pseudopython for a.a:

def decodeExceptionString(str):
    aesKey = <data from previous step>
    xorKey = <data from decompiled method>
    blockSize = 16
    aes = AES(aesKey)
    stringBytes = Base64.decode(str)
    outputString = ""
    for blockStart in xrange(0, len(stringBytes), blockSize):
        encryptedBlock = stringBytes[blockStart:blockStart + blockSize]
        plaintextBlock = aes.decrypt(encryptedBlock)
        outputString += plaintextBlock ^ xorKey
        xorKey = encryptedBlock
    return outputString

In a nutshell, besides AES with an obfuscated key, this class appears to implement CBC (cipher block chaining) which was not present in the original JAES library.
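The chaining itself can be demonstrated with a small runnable sketch. The real routine uses JAES's AES; here a pluggable `block_decrypt` callable stands in so only the CBC-style logic is exercised (the obfuscated xorKey plays the role of the IV for the first block):

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def decode_chained(ciphertext, block_decrypt, xor_key, block_size=16):
    """CBC decryption as implemented in a.a: each decrypted block is XORed
    with the previous ciphertext block; the obfuscated xorKey acts as IV."""
    out = b""
    prev = xor_key
    for start in range(0, len(ciphertext), block_size):
        block = ciphertext[start:start + block_size]
        out += xor_bytes(block_decrypt(block), prev)
        prev = block
    return out
```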

A simple test to make sure it works:

$ ./decode "ASszutKFJW3iqDb7X/+vqAcYxTLXN2SJOIs0ne596Pu3ZoRxjiiscwhV6fT70efX"
Não é possível alterar PIN sem estar logado.

The message reads (in Portuguese) “it is not possible to change PIN without being logged in”. Success!

Deobfuscating the exception strings was a fun battle, but the war was not over yet. I had yet to figure out how to generate an authentication code myself. After looking around the code for a long while, I found a good entry point to the token generation process in the br.com.othercompany.token.dispositivo.OTP class. Here's a snippet, with the exception strings deobfuscated.

public String calculate() throws TokenException {
  int i = (int)Math.max(Math.min((this.a.getConfiguracao().getAjusteTemporal() + Calendar.getInstance().getTime().getTime() - 1175385600000L) / 36000L, 2147483647L), -2147483648L);
  a();
  if (i < 0)
    throw new TokenException("Janela negativa", i);
  int j = (0x3 & this.a.getConfiguracao().getAlgoritmos().a) >> 0;
  switch (j) {
    default:
      throw new TokenException("Algoritmo inválido:" + j, i);
    case 0:
      return a(i);
    case 1:
  }
  return o.a(this.a.getConfiguracao().getChave().a(20), i);
}

This method basically generates a timestamp i which is the number of 36-second intervals since April 1st, 2007 at midnight (expressed as the Unix timestamp 1175385600000L). Why 36 seconds? That's how long each token lasts. Why April 1st, 2007 at midnight? No idea.

It also includes a correction factor (getAjusteTemporal(), which means temporal adjustment in Portuguese). I assume this is calculated at activation time as the difference between the server’s and the device’s clocks. In this snippet, o.a is the core token generating function, and its parameters are a byte array (a key) and the current timestamp.
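In Python, the window computation (including the clamp to a signed 32-bit int that the decompiled code performs) looks like this; `adjust_ms` stands for the getAjusteTemporal() correction:

```python
EPOCH_MS = 1175385600000   # 2007-04-01 00:00:00 UTC, in milliseconds
WINDOW_MS = 36000          # each token lives for 36 seconds

def window_index(now_ms, adjust_ms=0):
    """Number of 36-second windows since the app's custom epoch,
    clamped to a signed 32-bit integer like the Java original."""
    i = (adjust_ms + now_ms - EPOCH_MS) // WINDOW_MS
    return max(min(i, 2**31 - 1), -(2**31))
```

Calling `window_index(int(time.time() * 1000))` gives the current window.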

Finding the key

The key is obtained by calling this.a.getConfiguracao().getChave().a(20) in the previous snippet. this.a is a Perfil (profile) object; getConfiguracao() returns a Configuracao (settings) object; getChave() returns a z object; a(int) returns a byte array, which is the key itself.

The z class is obfuscated but fortunately quite simple. It is just a wrapper around a byte array up to 32 bytes in length, and its a(int) method truncates that array to the provided length. The Perfil object, in turn, gets created by the PersistenciaDB (persistence database) class, which contains a bunch of obfuscated strings:

a = a.a("DwYyIlrWxIS9ruNMCKH/PQ==");
b = a.a("SceoTjidi0XqlgRUo9hcDw==");
c = a.a("yrYBlcp8nEfVKUT9WSqTqA==");
d = a.a("jUTzBfsP/AO/Kx/1+VQ3CQ==");
e = a.a("Y56SnU/pIKROPCLHu7oFuw==") + b + a.a("38oyp4eW3xqT3TaMfWZ5RA==") + "_id"
  + a.a("3Q+FCEVH2PxZ31ms4WHHwNB40EbmtWzHPhwoaB1nM7lGr+9zZzuVpx5iZ4YR+KUw") + c
  + a.a("bYYIl6LtqthcUCCFFb7JCRSC8zr5hKIFXe5JHFCCkZA=") + d
  + a.a("ENCtPBu4RtFta2XI1GsQag==") + a.a("ImPhDy43f+Nr4G5ofkZz+g==");

Finally! Investigating the a.a method pays off. Here are the deobfuscated strings.

a = "token.db"; b = "perfis"; c = "nome"; d = "data"; e = "create table perfis (_id integer primary key autoincrement, nome text not null, data blob not null);";

A SQL statement, interesting. So that’s how it stores its profiles. And there’s a filename too, token.db, probably a SQLite database. Further investigation of the carregar (load) method in the PersistenciaDB class shows that, indeed, it is a SQLite database, accessed through the SQLiteDatabase class.
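Recreating the recovered schema makes it easy to pull the encrypted blobs out of a token.db copied from a rooted device. A quick sketch using Python's sqlite3 module (the schema string is the deobfuscated e above):

```python
import sqlite3

SCHEMA = ("create table perfis (_id integer primary key autoincrement, "
          "nome text not null, data blob not null);")

def load_profiles(db_path):
    """Return (nome, data) for every stored profile; data is still encrypted."""
    con = sqlite3.connect(db_path)
    try:
        return con.execute("select nome, data from perfis").fetchall()
    finally:
        con.close()
```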

One might think it would only be a matter of getting the key from the database, then. Not so fast. The data blob is encrypted as well, as evidenced further in the carregar method by the use of the aa.a method (so much for descriptive names - blame the obfuscation). That method accepts as parameters the data blob, an empty buffer, and a parameter that gets passed through the carregar method - a key - truncated to 16 characters.

Before investigating the crypto behind aa.a, I decided to find the key to decrypt the blob. It gets passed as a parameter to the carregar method. After digging around for a bit, I found the class that generates the key: PersistenciaUtils. Here it is, in its entirety:

public class PersistenciaUtils {
  public static byte[] getChave(Context paramContext, byte[] paramArrayOfByte) {
    try {
      byte[] arrayOfByte = MessageDigest.getInstance("SHA-1").digest(getId(paramContext).getBytes());
      return arrayOfByte;
    } catch (NoSuchAlgorithmException localNoSuchAlgorithmException) {
    }
    return new byte[20];
  }

  public static String getId(Context paramContext) {
    String str = Settings.System.getString(paramContext.getContentResolver(), "android_id");
    if (str == null)
      str = "<their default id>";
    return str;
  }
}

In other words, the SHA-1 digest of the device's android_id (a unique identifier), or a default value if that doesn't work. Notice that it hashes the hex string, not the actual bytes. So that's why Titanium Backup did not work when I tried it - I was not backing up this identifier, even though there was an option for that. It’s too late to go back now though, let’s keep reversing this app.
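The blob key derivation therefore boils down to a few lines of Python. Note that the digest is taken over the android_id's ASCII characters; the truncation to 16 bytes happens at the call site:

```python
import hashlib

def blob_key(android_id, default_id="<their default id>"):
    """SHA-1 of the android_id *string* (its hex characters, not the decoded
    bytes), as PersistenciaUtils.getChave does; falls back to the app's
    built-in default when android_id is unavailable."""
    if android_id is None:
        android_id = default_id
    return hashlib.sha1(android_id.encode()).digest()
```

`blob_key("0123456789abcdef")[:16]` would then be the 16-byte key handed to aa.a.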

To find my android_id, I used adb once again.

$ ./adb shell
shell@hammerhead:/ $ content query --uri content://settings/secure --projection name:value --where "name='android_id'"
Row: 0 name=android_id, value=0123456789abcdef
shell@hammerhead:/ $ exit

aa.a splits the data blob into several sequential fields: a 96-byte header, a 16-byte nonce, a 16-byte tag, and the rest of the blob as cryptotext. Further inspection of the aa class reveals some more details about the obfuscated classes:

  • Class e is an implementation of EAX, an AEAD (Authenticated Encryption with Associated Data) algorithm, from JAES.
  • Class f is an implementation of CMAC (Cipher-based Message Authentication Code), also from JAES.
  • Class h is an implementation of the CTR (counter) mode, from JAES as well.
  • Class l is an unknown implementation of the SHA-1 hashing algorithm. Interestingly, it is not used by the PersistenciaUtils class, which uses the MessageDigest class instead.
  • Class m is an unknown implementation of the HMAC (keyed-Hash Message Authentication Code) algorithm.
  • Class n is a wrapper around l and m, providing HMAC-SHA1.

The method aa.a derives a second key by computing the CMAC tag of the header and uses it to decrypt the cryptotext. In pseudopython:

def decodeBlob(datablob, android_id):
    header = datablob[:96]
    nonce = datablob[96:112]
    tag = datablob[112:128]
    cryptotext = datablob[128:]
    key1 = SHA1(android_id)[:16]
    aes = AES(key1)
    cmac = CMAC(aes)
    cmac.update(header)
    key2 = cmac.getTag()
    eax = EAX(key2, aes)
    (validTag, plaintext) = eax.checkAndDecrypt(cryptotext, tag)
    if validTag:
        return plaintext

If the EAX authentication succeeds, aa.a returns the decrypted content to PersistenciaDB, which then interprets the decrypted data.

Looking back at the PersistenciaDB class, the a method parses the decrypted data into a Perfil object. It is basically a deserialization into several booleans, shorts, and byte arrays. Several of the fields can be identified, and three stand out (their offsets were discovered by adding along the deserialization).

pin = int(blob[82:86])
key = blob[38:70]
timeOffset = long(blob[90:98])

And yes, this is finally the key I was looking for. My PIN matched, which was a welcome validation that my implementation was working correctly, and my time offset was small enough to ignore.
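Assuming Java's usual big-endian DataInputStream encoding (an assumption on my part; only the offsets come from the decompiled code), the three fields can be extracted like this:

```python
import struct

def parse_profile(blob):
    """Slice the interesting fields out of the decrypted profile blob."""
    key = blob[38:70]                                   # 32-byte token seed
    pin = struct.unpack(">I", blob[82:86])[0]           # stored PIN
    time_offset = struct.unpack(">q", blob[90:98])[0]   # ms clock correction
    return key, pin, time_offset
```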

Understanding the code generation process

The key obtained at the previous step gets truncated to 20 characters in the OTP class, which then passes it along with the timestamp to the o.a method. That method references several of the obfuscated classes identified in the previous steps, which is a relief. Based on that, here's some pseudopython for that method.

def generateToken(key, timestamp):
    message = [0] * 8
    for i in xrange(7, 0, -1):
        message[i] = timestamp & 0xFF
        timestamp >>= 8
    hmacSha1 = HMAC_SHA1(key)
    hmacSha1.update(message)
    hash = hmacSha1.getHash()
    k = 0xF & hash[-1]
    m = ((0x7F & hash[k]) << 24 | (0xFF & hash[(k + 1)]) << 16 |
         (0xFF & hash[(k + 2)]) << 8 | 0xFF & hash[(k + 3)]) % 1000000
    return "%06d" % m

Basically the timestamp (a long, 8 bytes in length) gets (manually) turned into a big-endian byte array. That array gets hashed with HMAC-SHA1, using the key as the HMAC key. The last four bits of the hash determine an offset at which a four-byte integer is read. Take that integer, modulo 1000000, and that’s our code. Simple, huh? Yeah, I didn’t think so either. But it works!
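The procedure just described is the dynamic truncation scheme from RFC 4226 (HOTP). A runnable reimplementation using only Python's standard library; the real app feeds it the 36-second window index in place of a counter:

```python
import hmac, hashlib, struct

def generate_token(key, counter, digits=6):
    """HOTP (RFC 4226): 8-byte big-endian counter, HMAC-SHA1, then dynamic
    truncation at the offset given by the hash's last four bits."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0xF
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return "%0*d" % (digits, code)
```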

A while later, I found this snippet in Google Authenticator's implementation of TOTP:

public String generateResponseCode(byte[] challenge) throws GeneralSecurityException {
  byte[] hash = signer.sign(challenge);
  int offset = hash[hash.length - 1] & 0xF;
  int truncatedHash = hashToInt(hash, offset) & 0x7FFFFFFF;
  int pinValue = truncatedHash % (int) Math.pow(10, codeLength);
  return padOutput(pinValue);
}

Look familiar? It's the exact same algorithm. In fact, only a couple of things prevented me from creating a Google Authenticator QR code from this data:

  • The arbitrary timestamp epoch of April 1st, 2007 at midnight.
  • The period, which is 30 seconds in Google Authenticator and 36 seconds in my bank's token. The key URI format used by Google Authenticator accepts a period parameter which could fix this, but the application currently ignores it.

With Google Authenticator ruled out, I decided to build a hardware token of my own on an Arduino. Or, rather, on a Texas Instruments Stellaris LaunchPad I had lying around. They are actually code-compatible when using the Energia IDE, and I even used some Arduino-specific libraries:

  • Cryptosuite

    A cryptographic library for Arduino (including SHA and HMAC-SHA)

  • RTClib

    A lightweight date and time library for JeeNodes and Arduinos.

  • 2x16LCD_library

    A library for 2x16 LCD (like JDH162A or HD44780) written for Energia and Stellaris Launchpad (LM4F).

The RTC part needs improvement. Since the Stellaris LaunchPad does not have an onboard RTC, the internal clock needs to be set at each startup, which is cumbersome and requires a computer to get it going. For now, here’s the complete code:

#include <sha1.h>
#include <LCD.h>
#include <RTClib.h>

RTC_Millis RTC;

void setup() {
  RTC.begin(DateTime(__DATE__, __TIME__));
  LCD.init(PE_3, PE_2, PE_1, PD_3, PD_2, PD_1);
  LCD.print("Token");
  LCD.print("valverde.me", 2, 1);
  delay(1000);
  LCD.clear();
}

char token[6];
uint8_t message[8];
long timestamp = 0;
long i = 0;
uint8_t key[] = {<your key here>};

void showToken() {
  long now = RTC.now().get() - 228700800 + 7200;
  i = now / 36;
  int timeLeft = now % 36;
  for (int j = 7; j >= 0; j--) {
    message[j] = ((byte)(i & 0xFF));
    i >>= 8;
  }
  Sha1.initHmac(key, 20);
  Sha1.writebytes(message, 8);
  uint8_t *hash = Sha1.resultHmac();
  int k = 0xF & hash[19];
  int m = ((0x7F & hash[k]) << 24 | (0xFF & hash[(k + 1)]) << 16 |
           (0xFF & hash[(k + 2)]) << 8 | 0xFF & hash[(k + 3)]) % 1000000;
  LCD.print(m, 2, 1);
  LCD.print(36 - timeLeft, 2, 15);
}

void loop() {
  LCD.clear();
  LCD.print("Current token:");
  showToken();
  delay(1000);
}

An interesting hack occurs on this line:

RTC.begin(DateTime(__DATE__, __TIME__));

Instead of figuring out a way to set the clock at every startup, I used this hack in which the current time is filled in by the compiler just before the code gets uploaded to the board, resulting in a close-enough internal clock. RTClib generates timestamps with an epoch of Jan 1st, 2000 at midnight, so the code generator's epoch had to be converted to 228700800. A correction factor of 7200 was also required because the compiler fills in the local time instead of UTC, so it is two hours behind for me.
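The offset is just the distance between the two epochs, which is easy to double-check:

```python
import datetime

# Token epoch: 2007-04-01 00:00 UTC; RTClib epoch: 2000-01-01 00:00 UTC.
token_epoch = datetime.datetime(2007, 4, 1, tzinfo=datetime.timezone.utc)
rtc_epoch = datetime.datetime(2000, 1, 1, tzinfo=datetime.timezone.utc)
RTC_EPOCH_OFFSET = int((token_epoch - rtc_epoch).total_seconds())
```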

It is important to mention that this project’s results do not allow me or anyone else to hack into bank accounts, or even replicate a client’s token without access to a rooted device with an active code generator. On the other hand, a malicious third party application with root privileges would have access to all the information required to generate codes, but would still need the account details (including a password) to fully compromise an account.

I would like to thank Daniel Nascimento, Raphael Campos and Miguel Gaiowski for helping me review this article.

Finally, here's some proof that it works (a couple of seconds too fast) :)

Article 29

I’ve been programming since I was 10, but I don’t feel like a “hacker” « Playing with Negative Space


Comments:"I’ve been programming since I was 10, but I don’t feel like a “hacker” « Playing with Negative Space"

URL:http://blog.lizdenys.com/2014/01/03/i-do-not-feel-like-a-hacker/


When I was 10, I was programming in Logo after being introduced to it in my school's required computer class. Our teacher did not once call this programming; it was just another project among ones that usually weren't programming. I generalized almost every exercise—something that most of my classmates weren't interested in doing, and also something that can be tricky, but useful, when writing software. Instead of a teacher pointing out that I handled the assigned non-generalized exercise well, I was criticized for playing around with generalization because it was "harder to grade". Meanwhile, male classmates who wrote very similar code to my non-generalized versions were praised for their work. This was the only programming opportunity I was made aware of for the next few years, despite telling my teacher I wanted to do more things like writing in Logo. I also tried to search online for related things to do, but since I didn't know the term "programming", searching the internet circa 1999 to 2003 didn't yield much.

My second introduction to programming happened when I turned 13. Like many other teenagers, I started a blog. Even back then, blogs had some amount of a social aspect, so I ran across other blogs frequently. I fell in love with some of their designs and discovered that you could highly customize a blog's look and feel. Customization ended up being far more exciting to me than actually writing posts, and I got really into it: I learned a lot about HTML and CSS markup, then expanded my knowledge to PHP so I could write a dynamic content site that served me well. At the time, I was unaware that this was another form of programming. Forums didn't tend to refer to these skills as web programming—it was simply the task of "creating a website".

I came across my third programming opportunity at 16. Some of my high school's student advisers asked a friend and me to develop an internal registration system because we had strong math and logic backgrounds. They called this a "programming project": it was the first time something I had worked on was referred to as "programming". Despite my shouldering a significant amount of work, he got almost all of the praise. This lack of recognition was discouraging and made me feel like programming was not something people thought I could pursue. Not everything in my life was like this, however: I felt very encouraged by my mathematics and economics teachers to pursue my dreams in those fields, so that's what I initially went to college to study.

The end of my freshman year in college was the first time that anyone reacted to my interest in programming—or, as far as I could see, to anyone's interest—with something other than indifference or discouragement. I slowly realized that the negativity surrounding my previous experiences wasn't because the world was apathetic about programming; the cause was people's unease towards working with an interested young woman. This newfound constructive environment got me really fired up about the subject, and I changed my majors from math and economics to math and computer science. I finally found out about how programming was a part of a broad field known as "computer science and software engineering", a respected field full of awesome people and interesting problems. This turned out to be a fantastic decision for me, and I am eternally grateful for the friends (all male) who encouraged me to do so.

I found out a few months after graduating college that I'd secretly been hacking since I was 10. I don't mention this to many people, in part because it doesn't occur to me to do so. In fact, it was only after finishing the first draft of this post that I remembered that writing assembly on the TI-83+ in high school also counts. It was certainly valuable experience, but I guess this is a sign that I don't tend to think of these experiences as though they were "hacking". My friends call me a "hacker", and I begrudgingly agree, but I still don't feel proud of those experiences or reflect positively on them. I feel awkward writing about them.

It also turns out that I had more opportunities than many women who were of similar age at the time, and my experiences were not positive ones, but ones that made me feel discouraged. Many women who grew up when I did were never aware that programming and "hacking" were things that they, or their male counterparts, could do. It was a field that was completely invisible to them—even as one of the lucky ones who stumbled upon opportunities early on, I still perceived the field as exclusionary at worst and invisible at best. I am not going to claim that the perceived invisibility is unique to women—for example, I grew up just outside of Chicago where there were people with software engineering jobs, but in rural areas, the field is far less represented. Still, I imagine that this is unfortunately more common among women due to the ongoing sexism surrounding the field and the effects that this has on young, impressionable women. Despite how invisible the field was to many people I know, a good number of these people, both male and female, have grown to be software engineers I respect immensely, even though they were not the "hackers" that got an early start.

Every so often, I think that the invisibility of software engineering and the sexism within the field have virtually gone away—or at least that they are going away. It certainly has in many places I frequent these days: I live in New York City, I've opted out of the SF/Silicon Valley startup scene for the time being, and I have found equal footing by being a software engineer and data scientist at a high-frequency trading company. But sadly, these problems haven't gone away. One such reminder of the gender gap is pointed out in Paul Graham's interview with The Information:

God knows what you would do to get 13 year old girls interested in computers. [...] We can't make these women look at the world through hacker eyes and start Facebook because they haven't been hacking for the past 10 years.

I don't think he deserved the flaming that he received for this statement—his statement is true. Women often haven't been "hacking for the past 10 years". The same thing can be said about a lot of male software engineers. I admit that some women and arguably more men were lucky and had the opportunities to start becoming a "hacker" early on. I am among those lucky women, but I didn't know it at the time. Now, I know it, but it's surrounded by mixed feelings. I personally feel qualified to take on the title of "hacker" because of my early and broad experience with programming, but simultaneously feel that I'll never truly be one because I don't fit the stereotype and am okay with that: I wear dresses and heels instead of hoodies and sneakers, I keep a regular sleep schedule, and most of all, I'm not male. I feel like I might be earning extra respect because of my extra years of experience, but I find that advantage extremely unfair to the many spectacular "non-hacker" software engineers out there. Actually, I might not even be getting that advantage—I didn't notice I was a "hacker" for so long, so why would anyone else see it? I have to wonder how many other women who've been programming for the past 10 years also were, or still are, unable to notice it.

It's important to understand that the underrepresentation of women among "hackers" doesn't mean women had the option to become them but were uninterested. The issues of invisibility and sexism illustrated above have systematically been leaving women behind or even pushing them out of the pool. I don't have all the answers about how to "get 13 year old girls interested in computers", but I know that it has to start with the field becoming visible to them. The issues surrounding women who did not have these opportunities at a young age compound on top of the issues that I mentioned the woman "hacker" faces. In addition to being unable to self-identify with the "hacker" stereotype, starting to write code at a later age necessitates working twice as hard to "catch up" to the "hacker". Actually, doubling up on the work is becoming increasingly necessary not just to compete with the "hacker", but also to succeed at all as a software engineer. Many women, and "non-hacker" men, really spend the time needed to catch up: an impressive achievement. Unfortunately, some of these hard-working "latecomers" face imposter syndrome in the face of the desirable "hacker" stereotype—we simply haven't figured out time travel yet, so they still feel powerless compared to the stereotype.

The prevalence of the "hacker" stereotype hurts those who don't identify with it, such as women; in turn, this hurts everyone. "Hacker" doesn't equate to the best software engineer, the best founder, or much of anything other than having benefited from a longer period of time to gain experience—extra time that may or may not have been used effectively to gain additional knowledge. But that's not the really disappointing part: it's the alienating connotations the term carries. Those who haven't been given the title of "hacker" are often ignored or pull themselves out of the competitive pool because it's a term they can never earn as the time frame for doing so has passed. This rejection might even discourage bright minds from seeking to start an equivalent "hacker" training at a time some might call years too late. Wouldn't it be better for everyone if people from all backgrounds were given the opportunity to succeed on merit and grow without overcoming unnecessary hurdles instead of focusing all our energy on the exclusionary "hacker" stereotype?

Senator presses NSA to reveal whether it spies on members of Congress | World news | theguardian.com


Comments:" Senator presses NSA to reveal whether it spies on members of Congress | World news | theguardian.com "

URL:http://www.theguardian.com/world/2014/jan/03/nsa-asked-spying-congress-bernie-sanders


A US senator has bluntly asked the National Security Agency if it spies on Congress, raising the stakes for the surveillance agency’s legislative fight to preserve its broad surveillance powers.

Bernie Sanders, a Vermont independent and socialist, asked army general Keith Alexander, the NSA’s outgoing director, if the NSA “has spied, or is the NSA currently spying, on members of Congress or other American elected officials”.

Sanders, in a letter dated 3 January, defined “spying” as “gathering metadata on calls made from official or personal phones, content from websites visited or emails sent, or collecting any other data from a third party not made available to the general public in the regular course of business”.

The NSA collects the records of every phone call made and received inside the United States on an ongoing, daily basis, a revelation first published in the Guardian in June based on leaks from whistleblower Edward Snowden. Until 2011, the NSA collected the email and internet records of all Americans as well.

In response, the NSA has argued that surveillance does not occur when it acquires the voluminous amount of phone data, but rather when its analysts examine those phone records, which they must only do, pursuant to the secret court orders justifying the collection, when they have “reasonable articulable suspicion” of a connection to specific terrorist groups. Declassified rulings of the secret surveillance court known as the Fisa court documented “systemic” violations of those restrictions over the years.

Sanders’ office suggested the senator, who called the collection “clearly unconstitutional” in his letter, did consider the distinction salient.

Asked if Sanders meant the collection of legislators’ and officials’ phone data alongside every other American’s or the deliberate targeting of those officials by the powerful intelligence agency, spokesman Jeff Frank said: “He’s referring to either one.”

The NSA did not immediately return a request for comment. Hours after Sanders sent his letter, the office of the director of national intelligence announced that the Fisa court on Friday renewed the domestic phone records bulk collection for another 90 days. 

Sanders’ question is a political minefield for the NSA, and one laid as Congress is about to reconvene for the new year. Among its agenda items is a bipartisan, bicameral bill that seeks to abolish the NSA’s ability to collect data in bulk on Americans or inside the United States without suspicion of a crime or a threat to national security. Acknowledgement that it has collected the communications records of American lawmakers and other officials is likely to make it harder for the NSA to argue that it needs such broad collection powers to defend against terrorism.

Civil liberties and tech groups are planning a renewed lobbying push to pass the bill, called the USA Freedom Act, as they hope to capitalize on a White House review panel that last month recommended the NSA no longer collect so-called metadata, but rely on phone companies to store customer data for up to two years, which is longer than they currently store it. 

On Friday, Shawn Turner, the spokesman for the director of national intelligence, said in a statement that the intelligence community "continues to be open to modifications to this program that would provide additional privacy and civil liberty protections while still maintaining its operational benefits," such as having the data "held by telecommunications companies or a third party".

Advocates want an end to the metadata bulk collection as well as no expansion of phone company data record storage.

The Senate judiciary committee, whose chairman Patrick Leahy is an architect of the USA Freedom Act, announced Friday that it will hold a hearing with the review panel’s membership on 14 January.

Additionally, the Justice Department announced a formal appeal of a 16 December federal court loss over the legality and constitutionality of the NSA’s bulk phone records collection effort. The appeal follows one by the ACLU, which sought redress in a different federal court after a judge ruled 27 December that the NSA bulk collection passes constitutional muster.

The NSA has yet to directly address whether elected officials are getting caught in its broad data trawls. While senator Jeff Merkley of Oregon dramatically waved his phone at Alexander during a June hearing – “What authorized investigation gave you the grounds for acquiring my cellphone data,” Merkley asked – the NSA has typically spoken in generic terms about needing the “haystack” of information from Americans it considers necessary to suss out terrorist connections.

The NSA and its allies have been under fire for months about their public presentation of the scope of domestic surveillance. House judiciary committee Republicans in December wrote to attorney general Eric Holder calling for an investigation of director of national intelligence James Clapper, who has acknowledged untruthfully testifying that the NSA does “not wittingly” collect data on millions of Americans.

“We must be vigilant and aggressive in protecting the American people from the very real danger of terrorist attacks,” Sanders wrote to Alexander on Friday. “I believe, however, that we can do that effectively without undermining the constitutional rights that make us a free country.”

Does Snapchat CEO Evan Spiegel need to go? - The Term Sheet: Fortune's deals blogTerm Sheet


Comments:"Does Snapchat's CEO need to go? - The Term Sheet: Fortune's deals blogTerm Sheet"

URL:http://finance.fortune.cnn.com/2014/01/03/snapchat-ceo-go/


Snapchat's response to its data breach is more troubling than the breach itself.


FORTUNE -- In the wake of Snapchat's massive data breach this week, one of two things has become clear. Either:

1. Snapchat CEO Evan Spiegel should be fired, or 2. Spiegel should fire whoever is advising him not to apologize for this mess.

In the days since a hacker published a database of around 4.6 million Snapchat user names and phone numbers, Snapchat has issued two public statements about the breach. One came yesterday via the company's blog. It described what had happened, what Snapchat was doing to prevent further attacks and asked users to inform Snapchat about other security vulnerabilities. The second came from Spiegel himself in a highly-edited Today Show interview with Carson Daly.

In neither venue did Snapchat explicitly apologize to users for what is obviously a massive violation of user trust. This is not about whether or not Snapchat should have fixed its security hole earlier -- it had previously acknowledged being warned about this very possibility -- or about legal liability. It's about doing right by the millions of people who use the service, in large part, because it is designed to offer a more private social networking and sharing experience than do sites like Facebook (FB) or Twitter (TWTR).

RELATED: Countdown to the Snapchat revolution

If Evan Spiegel is disinclined to apologize, or doesn't feel he should, then perhaps he really isn't up for the job. Whenever a 20-something CEO is replaced in Silicon Valley, people often say that he has been replaced by an "adult." It's usually both paternalistic and patronizing, but perhaps appropriate when the 20-something is not mature enough to say "I'm sorry."

But perhaps Spiegel really does want to utter those two words, but has been advised not to by one of those aforementioned "adults." Perhaps a board member or lawyer. In that case, then Spiegel should fire that person, or at least stop listening to them. Apologies under these circumstances are expected -- just ask Target (TGT) -- and Snapchat's failure to follow such reasonable convention has sparked a second day of stories that only serve to remind people that it was hacked.

I guess there also is a third possibility: No one has directly asked Spiegel to apologize, and his earlier failure to do so was a thoughtless oversight rather than an intentional strategy. So, just in case, I sent him an email an hour or so ago with that explicit question. If he replies, I'll be sure to update this post (and rewrite lots of it). If not, someone needs to be fired.

Sign up for Dan's daily email newsletter on deals and deal-makers: GetTermSheet.com


Article 23


Comments:""

URL:http://bandyt.site44.com/toshiba/


Don't buy toshiba products

Toshiba says they made a mistake, but they still cannot help me.

I bought a Toshiba laptop about 9 months ago; it is still under warranty. It does not turn on at all. The warranty card states that the laptop is covered in the United States, US territories, Latin America and the Caribbean. This is the only reason I chose to buy Toshiba. I sent the laptop to Toshiba in Guatemala and they refused to honor the warranty because they say that Latin America is not covered. I called customer service for about 2 hours; they said that they made a mistake in printing the warranty card and that I would have to pay to repair the laptop.

I talked to the manager and she refused to help when it is clear they are wrong. So I am putting this online so everyone sees what kind of company Toshiba really is.

I have attached the warranty card and the warranty on the side of the box for evidence.

Skrekstore — A shivering unisex bracelet that investigates our perception of 5 minutes.


Comments:"Skrekstore — A shivering unisex bracelet that investigates our perception of 5 minutes."

URL:http://skreksto.re/products/durr


A shivering unisex bracelet that investigates our perception of 5 minutes.

Everything's gone, sign up to know when it returns!

5 minutes is a ____ time

We made Durr to explore how we perceive 5 minutes in different situations. By markedly shivering every 5 minutes, it creates a haptic rhythm to make us notice the changing tempo of time.

Wait what

Time perception is our subjective understanding of how fast time passes. Our ability to accurately estimate durations depends on a range of factors. With Durr you become aware of how your brain alters the length of a bus ride, how fast you finish a beer, how time flies by when you enjoy yourself, and drags along when you wait in line at the post office.

Alpha colors

This is an internal experiment, but we wonder if more people are interested. Are you? We have made 50 of these, as a limited alpha run. It comes numbered in 5 different colors (10 of each), with a replaceable battery that lasts up to two months. The diameter is 39mm, height 9.5mm. See the color range below.

Made in Oslo

The chassis and fastening mechanism of Durr are sintered in polyamide and hand-dyed by us. The strap is laser-cut Norwegian vegetable-tanned leather, and we program and hand-solder the electronics with RoHS-compliant (lead-free) components on ENIG-plated circuit boards. We're writing about how we make them on our blog.

No

No, you can't adjust the time between vibrations. It's ON/OFF only. Trust us, we tried everything between 2 and 15 minutes, it was either too short or too long. About 5 minutes is perfect for its purpose. Oh, and no, it's not waterproof, so don't shower with it.

Want one?

Everything's gone, sign up for updates!

Open WhisperSystems >> Blog >> The Difficulty Of Private Contact Discovery


Comments:"Open WhisperSystems >> Blog >> The Difficulty Of Private Contact Discovery"

URL:https://whispersystems.org/blog/contact-discovery/


Building a social network is not easy. Social networks have value proportional to their size, so participants aren't motivated to join new networks that aren't already large. It's a paradox: if people haven't already joined, others aren't motivated to join.

The trouble is that while building a social network is hard, most interesting software today is acutely "social." Even privacy enhancing technology, which seems anathema to the aesthetic of social networking, is tremendously social. For people to effectively use private communication software like TextSecure, they need to know which of their friends they can contact using TextSecure.

Access to an existing social graph makes building social apps much easier. Traditionally, social apps turn to services like Facebook or Twitter for their social graph. By using “Facebook connect” (sign in with Facebook) or “Twitter OAuth” (sign in with Twitter), it’s possible to write applications on top of an existing social graph (owned by Facebook or Twitter) instead of having to create a new one.

The Mobile Graph

The migration towards mobile devices is in some ways a threat to the traditional monopolies of the social graph. Mobile devices feature a social graph that isn’t controlled by any single entity, and which anyone can access: the device’s “contacts.”

If a service uses an identifier already listed in a typical contact card (phone number or email), it's simple to quickly display which contacts of a user are also registered with the service and immediately make social features available to that user. This means friends don't have to "discover" each other on a service if they already have each other as contacts.

The problem is that the simplest way to calculate the intersection of registered users and device contacts is to upload all the contacts in the address book to the service, index them, reverse index them, send the client the intersection, and subsequently notify the client when any of those contacts later register.
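As a concrete baseline, the naive (non-private) flow described above amounts to a set intersection on the server. A minimal sketch in Python, with made-up phone numbers:

```python
# Naive contact discovery: the client uploads its whole address book
# and the server returns the intersection with its registered-user
# table. All identifiers here are illustrative.
registered = {"+15551230001", "+15551230002", "+15551230003"}

def discover(uploaded_contacts):
    """Return the uploaded contacts that are registered users."""
    return sorted(registered & set(uploaded_contacts))

print(discover(["+15551230002", "+15559999999"]))  # ['+15551230002']
```

This is exactly the step that leaks the entire address book to the service, which is what the rest of the post tries to avoid.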

This is not what you’d call a “privacy preserving” process. Many people are uncomfortable with their entire address book being sent to a server somewhere. Maybe they’re worried the server will spam their friends, maybe they have some sensitive contacts they don’t want to share, or maybe it just doesn’t feel right.

Solutions That Don’t Work

Ideally we could come up with a privacy preserving mechanism for achieving the same thing. The problem becomes: How do we determine which contacts are registered with a service, without revealing the contacts to the service?

Hash It!

The first instinct is often just to hash the contact information before sending it to the server. If the server has the SHA256 hash of every registered user, it can just check to see if those match any of the SHA256 hashes of contacts transmitted by a client.

Unfortunately, this doesn't work because the "preimage space" (the set of all possible hash inputs) is small enough to easily calculate a map of all possible hash inputs to hash outputs. There are only roughly 10^10 phone numbers, and while the set of all possible email addresses is larger, it's still not intractably large. Inverting these hashes is basically a straightforward dictionary attack. It's not possible to "salt" the hashes, either (they always have to match), which makes building rainbow tables possible.
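To make the dictionary attack concrete, here is a hypothetical sketch; the number format and the small search block are invented for brevity, and a real attacker would simply enumerate the full numbering plan, which is cheap for SHA256:

```python
import hashlib

# Illustrative dictionary attack on hashed phone numbers. For brevity
# we search a single 10,000-number block; extending the loop to all
# ~10^10 numbers is straightforward on commodity hardware.
def invert_phone_hash(target_hash, block_prefix="+1555011"):
    for line in range(10_000):
        candidate = f"{block_prefix}{line:04d}"
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

target = hashlib.sha256(b"+15550112345").hexdigest()
print(invert_phone_hash(target))  # +15550112345
```

Because the hash must match across clients, no per-user salt can be added, so precomputing this mapping once defeats every upload.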

Bloom Filters and Encrypted Bloom Filters

There’s an entire field of study dedicated to problems like this one, known as “private information retrieval” (PIR). The simplest form of PIR is for the server to send the client the entire list of registered users, which the client can then query locally. Basically, if the client has its own copy of the entire database, it won’t leak its database queries to the server.

One can make this slightly more network efficient by transmitting the list of registered users in a bloom filter tuned for a low false positive rate.
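A minimal bloom filter of the kind described might look like the following sketch; the parameters and the SHA256-based hash family are illustrative choices, not TextSecure's actual implementation:

```python
import hashlib

class BloomFilter:
    """Toy bloom filter: k hash positions per item over a fixed bit array."""

    def __init__(self, size_bits, num_hashes):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8 + 1)

    def _positions(self, item):
        # Derive k positions by hashing the item with a counter prefix.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter(size_bits=10_000, num_hashes=4)
bf.add("+15551234567")
print("+15551234567" in bf)  # True
print("+15557654321" in bf)  # False (with overwhelming probability)
```

The client downloads the filter once and answers membership queries locally, so individual lookups never reach the server.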

To avoid leaking the list of all registered users, it’s even possible to build a “symmetric PIR” system using “encrypted bloom filters” by doing the following:

1. The server generates an RSA key pair which is kept private.
2. Rather than putting every user into a bloom filter, the server puts the RSA signature of each user into the bloom filter instead.
3. The client requests the bloom filter, which now contains an RSA signature of each registered user.
4. When the client wishes to query the local bloom filter, it constructs a "blinded" query as per David Chaum's blind signature scheme.
5. The client transmits the blinded query to the server.
6. The server signs the blinded query and transmits it back to the client.
7. The client unblinds the query to reveal the server's RSA signature of the contact it wishes to query.
8. The client then checks its local bloom filter for that value.
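The blind-signature exchange can be sketched with textbook RSA and toy parameters. This is purely illustrative (tiny primes, no padding, no full-domain hashing) and not production cryptography; the point is only that the server signs without learning which contact is being queried:

```python
import secrets
from math import gcd

# Server's keypair (toy primes, for illustration only).
p, q = 1009, 1013
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def client_blind(m):
    """Blind the query m with a random r coprime to n."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    return (m * pow(r, e, n)) % n, r  # blinded query; r stays secret

def server_sign(blinded):
    # The server signs the blinded value; it never sees m itself.
    return pow(blinded, d, n)

def client_unblind(signed, r):
    # (m * r^e)^d = m^d * r, so dividing by r recovers m^d mod n.
    return (signed * pow(r, -1, n)) % n

m = 424242 % n                       # stands in for a hashed contact
blinded, r = client_blind(m)
sig = client_unblind(server_sign(blinded), r)
assert sig == pow(m, d, n)           # equals the direct signature of m
```

The client then checks `sig` against its local bloom filter of server-signed registered users.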

It’s also possible to compress “updates” to the bloom filter. The server just needs to calculate the XOR of the version the client has and the updated version, then run that through LZMA (the input will be mostly zeros), and transmit the compressed diff to the client.
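That diff scheme is easy to sketch with Python's standard library; the filter size and the single-bit change here are invented for the demo:

```python
import lzma

def bloom_diff(old_bits: bytes, new_bits: bytes) -> bytes:
    # XOR the two filter versions; unchanged bits become zero, so the
    # result is mostly zeros and compresses extremely well.
    xored = bytes(a ^ b for a, b in zip(old_bits, new_bits))
    return lzma.compress(xored)

def apply_diff(old_bits: bytes, compressed_diff: bytes) -> bytes:
    xored = lzma.decompress(compressed_diff)
    return bytes(a ^ b for a, b in zip(old_bits, xored))

old = bytes(1_000_000)        # 1 MB filter, all zeros for the demo
new = bytearray(old)
new[1234] = 0x80              # one newly registered user flips a bit
diff = bloom_diff(old, bytes(new))
print(len(diff))              # a few hundred bytes, not 1 MB
assert apply_diff(old, diff) == bytes(new)
```

Clients that already hold an older filter version only fetch these small diffs instead of re-downloading the whole filter.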

Unfortunately, for a reasonably large user base, this strategy doesn’t work because the bloom filters themselves are too large to transmit to mobile clients. For 10 million TextSecure users, a bloom filter like this would be ~40MB, requested from the server 116 times a second if every client refreshes it once a day.
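The ~40MB figure can be sanity-checked with the standard optimal-bloom-filter sizing formula, assuming a guessed false-positive rate of about 2×10^-7 (the post doesn't state the rate it used):

```python
import math

# An optimally sized bloom filter needs -n*ln(p)/ln(2)^2 bits for n
# entries at false-positive rate p. p here is an assumption chosen to
# reproduce the quoted figure.
n, p = 10_000_000, 2e-7
bits = -n * math.log(p) / math.log(2) ** 2
print(f"{bits / (8 * 10**6):.0f} MB")    # 40 MB
print(f"{n / 86_400:.0f} requests/sec")  # 116, if each client fetches daily
```

The request rate follows directly: 10 million daily refreshes spread over 86,400 seconds is about 116 fetches of a ~40MB blob per second.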

Sharded Bloom Filters

Instead of putting all registered users into a single bloom filter, one could imagine sharding the users into buckets, each of which contains a bloom filter for that shard of users.

This creates a privacy vs. network overhead trade-off. At one end of the spectrum, a single big bucket with all registered users in it provides perfect privacy but high network overhead: the server learns nothing when it’s requested, but has to transmit a lot. At the other end of the spectrum, many buckets, each of which contains a bloom filter with only a single user provides zero privacy but low network overhead: the server learns exactly what the full query is whenever a bucket is requested, but only has to transmit very little.

The hope would be that somewhere in the middle there might be an acceptable trade-off. If there were only two buckets, for instance, the client would only have to leak one bit of information to the server, but would be able to retrieve a somewhat smaller bloom filter.

In the end, it’s difficult to find an acceptable trade-off here. The average Android user has approximately 5000 contacts synced to their device. Ignoring the slight collision rate that kicks in with the birthday paradox, in the worst case this means the client will end up requesting 5000 different buckets. In order to make the total download size add up to something like a reasonable 2MB, this means each bucket can only represent 100 possible identifiers.
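The bucket arithmetic works out as follows, under assumed parameters (a 2MB download budget, 5000 worst-case bucket fetches, and roughly 32 filter bits per identifier):

```python
# Back-of-the-envelope bucket sizing; all parameters are assumptions
# matching the scenario described in the text.
contacts = 5000
budget_bytes = 2 * 1024 * 1024
bucket_bytes = budget_bytes // contacts          # 419 bytes per bucket
bits_per_id = 32                                 # assumed filter density
ids_per_bucket = bucket_bytes * 8 // bits_per_id
print(ids_per_bucket)  # 104
```

So each bucket can cover only on the order of 100 identifiers, which is the anonymity set the next paragraph argues is far too small.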

This basically provides very little privacy. In the case of phone numbers as an identifier, of the 100 possible numbers a client could be inquiring about in its request for a bucket, many of the 99 “other” numbers might not be active phone numbers, or might represent phone numbers located in regions where contacts of that user would be unlikely to be located.

In the end, this strategy seems to provide very little at the cost of fairly substantial complexity.

That Academic Private Set Intersection Stuff

There are cryptographic protocols for performing privacy-preserving set intersections, but they generally involve transmitting something which represents the full set of both parties, which is a non-starter when one of the sets contains at least 10 million records.

The academic field of PIR is also quite a disappointment. It’s possible to improve on the network overhead of transmitting the entire data set, but generally at great computational cost. Intuitively, for the server not to learn anything about the query, it will need to at least touch every possible record in its data set when calculating a response. With a large number of users, this is potentially cost-prohibitive even if the server only needs to perform a simple operation (like XOR) for each registered user. Most PIR protocols are substantially worse, and require things like performing an RSA operation over each record of all registered users for each client query. That would translate into a client requesting that the server perform 10 million RSA operations, 5000 times!

This includes the small collection of protocols labeled “practical PIR,” which unfortunately don’t seem to be very practical at all. Most of them describe protocols that would take minutes of CPU-bound work for each client that wants to determine which of its contacts are registered users.

An Unsolved Problem

As far as we can determine, practical privacy preserving contact discovery remains an unsolved problem.

For RedPhone, our user base is still manageable enough (for now) to use the bloom filter technique. For TextSecure, however, we’ve grown beyond the size where that remains practical, so the only thing we can do is write the server such that it doesn’t store the transmitted contact information, inform the user, and give them the choice of opting out.

This would be a great problem to really solve well. If anyone develops any insight into a practical privacy preserving mechanism for achieving this type of contact discovery, or notices something we’ve missed, please let us know here, on twitter, or on our development mailing list.

Moxie Marlinspike, 03 January 2014

Andreessen: Tech Bubble Believers 'Don't Know What They're Talking About' - WSJ.com


Comments:"Andreessen: Tech Bubble Believers 'Don't Know What They're Talking About' - WSJ.com"

URL:http://online.wsj.com/news/article_email/SB10001424052702303640604579298330921690014-lMyQjAxMTA0MDAwMzEwNDMyWj


Updated Jan. 3, 2014 1:10 p.m. ET

In a 2011 essay in The Wall Street Journal, venture capitalist and Internet pioneer Marc Andreessen predicted that software companies are "eating the world" by replacing old industries with new services that are smarter, faster and cheaper.

If anything, Andreessen's prophecy is unfolding ahead of schedule. The smartphone is now a portal into a taxi ride, a doctor's appointment or a date.

Startups like Airbnb Inc., TaskRabbit Inc. and RelayRides Inc. have used software apps to pioneer a new economy where consumers share their materials and services. Google Inc., the 12th most-valuable company by market capitalization when Mr. Andreessen's essay was published, is now third on that list.

In an interview with The Wall Street Journal, Mr. Andreessen looked back on his predictions and made some new ones for the year ahead. He stands by his assertion that the rise of valuable new software companies is a fundamental economic shift—rather than a bubble—and explains the multibillion-dollar valuations of Pinterest Inc., a startup his venture-capital firm Andreessen Horowitz invested in, and Snapchat, one his firm didn't.

Edited excerpts:

WSJ: What's driving the current speed of technological progress?

Mr. Andreessen: It's only really in the last two years that the smartphone has now become a mass-market phenomenon. Heading into 2014, I think the number is like 2 billion smartphones in the world, and that number is growing really fast. Within three years, I don't think it's going to be possible to buy a phone that's not a smartphone. The vendors are literally going to stop making these low end feature phones, they're just going to make smartphones instead. That's what takes us to 5 billion.

We're just now starting to live in the world where everybody has a supercomputer in their pocket and everybody's connected. And so we're just starting to see the implications of that.

WSJ: Does that further the power of those who control the platforms—Apple Inc. and Google Inc.?

Mr. Andreessen: On current trends, yes. Apple and Google are both in extraordinarily powerful positions because they are the two dominant platform owners in this new world. And there's no question they are both gaining strength right now. The big question for Apple is: Can they hold market share in the high end of the market? And the question for Google is: Can they keep Android together, or does Android fragment, especially overseas?

WSJ: There's a flood of capital going into a range of software companies. Is this sustainable?

Mr. Andreessen: In my opinion, there's nothing broad-based that's happening. There's no bubble, per se. Bubbles are a very specific phenomenon where you've got mass psychology and you've got every mom and pop investor and every cabdriver and every shoe-shine boy buying stock in whatever it is—going all the way back to the South Sea Bubble all the way through to the dot-com bubble.

There's nothing like that. We're talking about a fairly small number of companies. And then, we're talking almost entirely on the private side. It hasn't really affected the public market that much.

Andreessen's Portfolio

Winners

  • Nicira—Cloud networking startup sold to VMWare for about $1.26 billion in 2012
  • Skype—Microsoft's purchase of the voice-over-Internet provider netted Andreessen $153 million less than two years after their investment
  • Zulily—Shares of the online retailer jumped 71% in their November debut

Losers

  • Fab—The e-commerce site has laid off staff and fallen short of its revenue targets after expanding overseas too quickly
  • Kno—Education software maker sold to Intel this year for a fraction of what venture capitalists put in
  • Zynga—Shares of the game maker still sag more than 60% below their IPO price of $10 in November 2011

WSJ: How is this different than the dot-com bubble?

Mr. Andreessen: The costs of building an Internet company today are far lower than they were in the late '90s. In the '90s if you wanted to build an Internet company, you needed to buy Sun servers, Cisco networking gear, Oracle databases, and EMC storage systems. And those companies would charge you a ton of money even just to get up and running. The new startups today, they don't buy any of that stuff. They don't buy literally anything from any of those companies.
Instead, they go on Amazon Web Services and they pay by the drink and they're paying somewhere between 100x and 1000x cheaper per unit—per unit of compute, per unit of storage, per unit of networking, per unit of software.

In retrospect, it's a miracle that anything worked in the late '90s given how limited the market was and given how expensive it was. It's a miracle that eBay worked, it's a miracle that Amazon worked.

The devil's in the details. It's really up to each company to demonstrate that it's going to be a franchise company and demonstrate over time that it can monetize appropriately. The ones that make it work are going to be enormously valuable. This is a time of very big franchise creation. The people who say it's all like the '90s and it's all going to come crashing down just don't know what they're talking about.

WSJ: But is there enough demand out there for two or three or more players in these categories that are getting a lot of venture money?

Mr. Andreessen: Generally in tech, the markets are winner take all. Google still competes with people in search and so forth, but over time, Google is gaining share against Microsoft and against Yahoo.

I think it's a question of: What are the categories versus industries? Are Dropbox and Box the same thing or are they different things? One way of looking at it is it's the same thing. Another way of looking at it—which is what we believe—is they are very different. Because one is consumer focused, the other is enterprise focused.

Another example is: Should all the sharing-economy companies just be one company? We think the answer is no. We think there is a big winner per vertical.

WSJ: With a lot of companies getting funding across the board, inevitably, many will fail. Is failure a good thing or a bad thing in Silicon Valley?

Mr. Andreessen: I'm really schizophrenic on this. I can argue both sides of it. The Midwestern Protestant in me is very strongly on the side of failure is terrible and horrible and awful and the goal of every entrepreneur should be to not fail. This whole thing where failure is somehow good in Silicon Valley, or failure is OK, or failure is wonderful, or failure is part of the process, is just a bunch of nonsense, and is actually a destructive sort of meme because it gives people an easy excuse to give up. If you look at a lot of the great successes in corporate history and in technology, they required real determination and real staying power.

The other side of it that I can argue equally enthusiastically, is that an enormous cultural positive for the Valley and more broadly the U.S. is that failure does not end your career. Failure is not a mark of shame that means you are done in your field—which is true in a lot of the rest of the world. In the Valley, it means you have valuable experience. One of the things I always tell our entrepreneurs is, don't just hire people out of successful companies, because the people out of successful companies didn't learn anything. Maybe they were just along for the ride. Whereas, the people who have been through tough times tend to be much more resilient and they tend to be much more determined and they're not daunted by things being hard.

The way I try to resolve it is, I think there's a grain of truth on both sides. I think both are kind of true and then it's just a question of nuance and judgment. You really can't just give up the minute things get hard. But at the same time, not everything works. And when something doesn't work, it shouldn't end your career, it should just inform the next thing you do. And that's kind of how the Valley works.

WSJ: How do you get behind startups with no business model? With Pinterest, how do you get from zero revenue to a $3.8 billion valuation?

Mr. Andreessen: There are two categories of companies like this. You can guess which one I think Pinterest is in.

There are the ones where everybody thinks they don't know how they're going to make money but they actually know. There's this kind of Kabuki dance that sometimes these companies put on where we're just a bunch of kids and we're just farting around and I don't know how we're going to make money. It's an act. They do it because they can. They don't let anyone else realize they have it figured out because that would just draw more competition. Facebook always knew, LinkedIn always knew, and Twitter always knew.

They knew the nature of the valuable product they were going to be able to offer and they knew people were going to pay for it. They hadn't defined it down to the degree of being ready to ship it, or they didn't have a sales force yet, so there were things that they hadn't yet done. But they knew. They had a high level of confidence and over the passage of time we discovered they were correct.

Now, there are other companies that honestly have no idea. Like, they really honestly have no idea. You need to be very cautious on these things because one of the companies that had no idea how it was going to make money when it first started was Google.

WSJ: Which type is Snapchat?

Mr. Andreessen: The bull case on Snapchat is that there's a company in China called Tencent that's worth $100 billion. And Tencent is worth $100 billion because it takes its messaging services on a smartphone and then wraps them in a wide range of services—things like gaming and social networking and emojis, and video chat—and then charges for all these add-on services. And it has been one of the most successful technology companies of all time and is worth literally $100 billion on the Hong Kong Stock Exchange. Maybe that's [CEO Evan Spiegel's] plan. Maybe Evan's plan is to transplant the Tencent business model into the U.S., which nobody has actually been able to do yet.

WSJ: As software eats the world, what happens to the older, incumbent businesses being attacked by newer startups? Will they simply decay and die and go away, or will they adapt?

Mr. Andreessen: If somebody wants to go into a full defensive crouch, some people do choose to do that. But I think more and more big companies are thinking OK, there are big opportunities here, there are new ways to reach out to customers, to ensure our customers are happy.
