Channel: Hacker News 50

Redesigned Conversations · GitHub


Comments:"Redesigned Conversations · GitHub"

URL:https://github.com/blog/1767-redesigned-conversations


Today we're excited to ship redesigned conversations on GitHub. Here's an example:

More meaningful conversations

Scanning and working with all the content available in conversations—replies, CI status, commits, code review comments—is now easier than ever.

Comments now stand out as the most important elements in a conversation. Comments that you make are also highlighted blue. Anything that isn't a comment—like commits or issue references—has been subdued to better differentiate content. All of these changes come together to help you focus on what matters most in a particular conversation.

Streamlined layout

We've consolidated and moved management tools for Issues and Pull Requests into the sidebar. Add or remove labels, update milestones, subscribe to notifications, and assign people within one spot. Also, you can now manage labels directly on Pull Requests.

Additionally, titles and state indicators (open, closed, or merged) have been moved to a more prominent header to quickly and easily identify the Issue or Pull Request you're viewing.

See the new conversations today in one of your favorite GitHub repositories.

Need help or found a bug? Contact us.


5 Mistakes Programmers Make when Starting in Machine Learning | Machine Learning Mastery


Comments:"5 Mistakes Programmers Make when Starting in Machine Learning | Machine Learning Mastery"

URL:http://machinelearningmastery.com/mistakes-programmers-make-when-starting-in-machine-learning/


There is no right way to get into machine learning. We all learn in slightly different ways and have different objectives for what we want to do with or for machine learning.

A common goal is to get productive with machine learning quickly. If that is your goal, then this post highlights five common mistakes programmers make on the path to quickly becoming productive machine learning practitioners.

Mistakes Programmers Make when Starting in Machine Learning
Photo credited to aarontait, some rights reserved.

1. Put Machine Learning on a pedestal

Machine learning is just another bag of techniques that you can use to create solutions to complex problems.

Because it is a burgeoning field, machine learning is typically communicated in academic publications and textbooks for postgraduate students. This gives it the appearance that it is elite and impenetrable.

A mindset shift is required to be effective at machine learning, from technology to process, from precision to “good enough”, but the same could be said for other complex methods that programmers are interested in adopting.

2. Write Machine Learning Code

Starting in machine learning by writing code can make things difficult because it means that you are solving at least two problems rather than one: how a technique works, so that you can implement it, and how to apply the technique to a given problem.

It is much easier to work on one problem at a time and leverage machine learning and statistical environments and libraries of algorithms to learn how to apply a technique to a problem. This allows you to spot-check a variety of algorithms relatively quickly and tune the one or two that look promising, rather than investing large amounts of time interpreting ambiguous research papers containing algorithm descriptions.
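To make this concrete, here is a minimal sketch of spot-checking a few algorithms with an existing library. The choice of Python and scikit-learn is my own for illustration; the post does not prescribe a specific tool.

# Spot-check several algorithms on a standard dataset using scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "k-nearest neighbors": KNeighborsClassifier(),
    "decision tree": DecisionTreeClassifier(random_state=0),
}

# Five-fold cross-validation gives a quick, comparable score for each algorithm,
# so you can pick the one or two worth tuning further.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")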

Implementing an algorithm can be treated as a separate project to be completed at a later time, such as for a learning exercise or when the prototype system needs to be put into operation. Learn one thing at a time: I recommend starting with a GUI-based machine learning framework, whether you are a programmer or not.

3. Doing Things Manually

A process surrounds applied machine learning, including problem definition, data preparation and presentation of results, among other tasks. These steps, along with the testing and tuning of algorithms, can and should be automated.

Automation is a big part of modern software development for builds, tests and deployment. There is great advantage in scripting data preparation, algorithm testing and tuning, and the preparation of results, in order to gain the benefits of rigor and speed of improvement. Remember and reuse the lessons learned in professional software development.

The failure to start with automation (such as Makefiles or a similar build system) is likely due to the fact that many programmers come to machine learning from books and courses that have less focus on the applied nature of the field. In fact, bringing automation to applied machine learning is a huge opportunity for programmers.
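As one illustration, the whole prepare-evaluate-report loop can live in a single script that is re-run whenever the data or the model changes. This is only a sketch under my own assumptions; the file name, the choice of scikit-learn and the steps themselves are placeholders, not anything prescribed by the post.

# A tiny, repeatable pipeline: prepare data, evaluate a model, write results.
import json

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def prepare_data():
    # Stand-in for real data loading and cleaning.
    return load_iris(return_X_y=True)

def evaluate(X, y):
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(model, X, y, cv=5)
    return {"model": "random_forest", "mean_accuracy": float(scores.mean())}

def report(results, path="results.json"):
    # Writing results to a file keeps every run comparable with the last one.
    with open(path, "w") as handle:
        json.dump(results, handle, indent=2)

if __name__ == "__main__":
    X, y = prepare_data()
    report(evaluate(X, y))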

4. Reinvent Solutions to Common Problems

Hundreds and thousands of people have likely implemented the algorithm you are implementing before you, or have solved a problem type similar to the problem you are solving. Exploit their lessons learned.

There is a wealth of knowledge out there on solving applied machine learning problems. Granted, much of it may be tied up in books and research publications, but you can access it. Do your homework: search Google, Google Books and Google Scholar, and reach out to the machine learning community.

If you are implementing an algorithm:

  • Do you have to implement it? Can you reuse an existing open source algorithm implementation in a library or tool?
  • Do you have to implement from scratch? Can you code review, learn from or port an existing open source implementation?
  • Do you have to interpret the canonical algorithm description? Are there algorithm descriptions in other books, papers, theses, or blog posts that you can review and learn from?

Photo credited to justgrimes, some rights reserved

If you are addressing a problem:

  • Do you have to test all algorithms on the problem? Can you exploit studies on this or similar problem instances of the same general type that suggest algorithms and algorithm classes that perform well?
  • Do you have to collect your own data? Are there publicly available data sets or APIs that you can use directly or as a proxy for your problem to quickly learn which methods are likely to perform well?
  • Do you have to optimize the parameters of the algorithm? Are there heuristics you can use for configuring the algorithm, presented in papers or studies of the algorithm?

What would be your strategy if you have a problem with a programming library or a specific type of data structure? Use the same tactics in the field of machine learning. Reach out to the community and ask for resources that you may be able to exploit to accelerate your learning and progress on your project. Consider forums and Q&A sites to start with and contact academics and specialists as the next step.

5. Ignoring the Math

You do not need the mathematical theory to get started, but maths is a big part of machine learning. The reason is that it provides perhaps the most efficient and unambiguous way to describe problems and the behaviors of systems.

Ignoring the mathematical treatments of algorithms can lead to problems such as having a limited understanding of a method or adopting a limited interpretation of an algorithm. For example, many machine learning algorithms have an optimization at their core that is incrementally updated. Knowing about the nature of the optimization being solved (is the function convex?) allows you to use efficient optimization algorithms that exploit this knowledge.
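As a small illustration of that point, the sketch below fits a least-squares model by gradient descent. The least-squares loss is convex, so plain gradient descent is guaranteed to reach the global minimum; the example and its parameters are my own, not from the post.

# Gradient descent on a convex loss (mean squared error for linear regression).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

w = np.zeros(3)
learning_rate = 0.01
for _ in range(2000):
    # Gradient of the mean squared error with respect to the weights.
    gradient = 2 * X.T @ (X @ w - y) / len(y)
    w -= learning_rate * gradient

print("estimated weights:", np.round(w, 2))  # close to [1.5, -2.0, 0.5]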

Internalizing the mathematical treatment of algorithms is slow and comes with mastery. Particularly if you are implementing advanced algorithms from scratch, including the internal optimization algorithms, take the time to learn the algorithm from a mathematical perspective.

Summary

In this post you learned about 5 common mistakes that programmers make when getting started in machine learning. The five lessons are:

  • Don’t put machine learning on a pedestal
  • Don’t write machine learning code
  • Don’t do things manually
  • Don’t reinvent solutions to common problems
  • Don’t ignore the math

Right on target: New era of fast genetic engineering - life - 28 January 2014 - New Scientist


Comments:"Right on target: New era of fast genetic engineering - life - 28 January 2014 - New Scientist"

URL:http://www.newscientist.com/article/mg22129530.900-right-on-target-new-era-of-fast-genetic-engineering.html#.Uuf4hXfTlok


(Image: Kotryna Zukauskaite)

A simple, very powerful method is making genome editing much easier and faster – prepare for a revolution in biology and medicine

SEQUENCING genomes has become easy. Understanding them remains incredibly hard. While the trickle of sequence information has turned into a raging torrent, our knowledge isn't keeping up. We still have very little understanding of what, if anything, all our DNA does.

This is not a problem that can be solved by computers. Ultimately, there is only one way to be sure what a particular bit of DNA does – you have to alter it in real, living cells to see what happens. But genetic engineering is very difficult and expensive.

At least, it used to be. Last month, two groups announced that they had performed a mind-boggling feat. They targeted and disabled nearly every one of our genes in cells growing in a dish. They didn't knock out all the genes in each cell at once, of course, but one gene at a time. That is, they individually modified a staggering 20,000 genes. "It's truly remarkable," says Eric Lander, director of the Broad Institute of MIT and Harvard, who led one of the studies. "This is transformative."

To put it into perspective, in 2007 an international project was launched to target and "knock out" each of the 20,000 genes a mouse possesses. It took the collective effort of numerous labs around the world more than five years to complete, and it cost $100 million. Now two small teams have each done something similar in a fraction of the time and cost. The secret: a simple and powerful new way of editing genomes. The term breakthrough is overused, but this undoubtedly is one. "It's a game-changer," says Feng Zhang, also at the Broad Institute, who led the other study.

The technique, unveiled just a year ago, is generating tremendous excitement as its potential becomes clear. It is already starting to accelerate the pace of research – Lander and Zhang used it to find out which genes help cancer cells resist a drug, for instance. In years to come, it is likely to be used in gene therapy, and to create a new generation of genetically engineered organisms with extensive but precise changes to their genomes. And if we ever do decide to genetically modify people, this is the tool to do it with.

While genetic engineers have done some amazing things, their first tools were very crude. They bombarded cells with extra DNA – sometimes literally – in the hope that it might occasionally get added to a cell's genome. But there was no way to control where in the genome it went, and if added DNA ends up in the wrong place it can cause havoc. Also, this approach does not allow for any tinkering with existing genes, which is the key to finding out what they and their variants do.

So in the past couple of decades the focus has switched to genome editing. To visualise how it works, imagine the genome as a collection of cookbooks written on long scrolls of paper and cared for by blind librarians. The librarians try to repair any damage but because they can't read they are easily tricked.

If you cut a scroll in two in the middle of a recipe, the librarians will join the pieces together again but in the process they often wreck the recipe. In other words, you can disable, or "knock out", a gene by cutting it.

What's more, if you add an extra piece of paper and then cut a scroll in two, the librarians will often assume the piece was cut from the scroll and add it in where the cut was made. In this way, segments of DNA can be added exactly where you want.

So the secret of genome editing is to cut DNA at just the right spot, and let the cell's DNA repair mechanisms do the rest for you. In practice, this means finding a molecule that, if added to a cell, will bind to a specific DNA sequence and cut the DNA at that point. There are natural proteins that do exactly this, but the chances of finding an existing protein that happens to target the one site in the entire genome that you are interested in are vanishingly small.

Instead, artificial proteins that bind to a specific DNA sequence have to be designed, made and tested for each edit you want to make. That can and is being done in many research labs around the world. Indeed, this kind of gene editing could soon be used in gene therapies to treat everything from sickle cell anaemia to HIV. Yet although there are now various tricks for speeding up the process of creating a designer DNA-binding protein, it is still far from easy. It can still take months or years of work to do yourself, or cost tens of thousands of dollars to have it done for you. To complicate matters further, much of the underlying technology has been patented.


The Next Big Thing You Missed: A Tiny Startup's Plot to Beat Google at Big Data | Wired Business | Wired.com


Comments:"The Next Big Thing You Missed: A Tiny Startup's Plot to Beat Google at Big Data | Wired Business | Wired.com"

URL:http://wrd.cm/1k2ZiLj


Ryan Spraetz helps run a Silicon Valley startup that aims to remake the future of online business, but he describes it with a metaphor that dates to the 16th century.

More than 400 years ago, Spraetz says one afternoon outside a San Francisco coffee shop, a Danish nobleman named Tycho Brahe spent most of his adult life collecting data that described the night sky. Each night for more than 30 years, Brahe would climb into his observatory and record the brightness and the position of the stars overhead. Then he died. But his young assistant, Johannes Kepler, would go on to use Brahe’s massive trove of data to formulate the three laws of planetary motion, the laws that proved the Earth revolves around the sun.

‘Because Brahe dedicated his whole life to gathering all that data, Kepler is now cemented into history.’

— Ryan Spraetz

“Because Brahe dedicated his whole life to gathering all that data, Kepler is now cemented into history,” Spraetz says, and this becomes an on-ramp to his startup, a 15-person company called KeenIO. As Spraetz explains it, Keen aims to provide the world’s online businesses with ready access to the sort of detailed data so diligently gathered by Brahe, giving them the information they need to make the big leap forward — to, as Spraetz puts it, “turn them into Keplers.”

It’s a highfalutin pitch — honed over several months during Keen’s residence in the startup incubator Tech Stars — but it’s also a captivating tale, and it taps into a sweeping trend in the tech world. Web giants like Google and Facebook and Twitter have achieved huge success in large part because of their ability to analyze the enormous amounts of data their online services generate — to see exactly how their businesses are operating at the lowest of levels — and now, many startups and open source projects aim to bring this “Big Data” know-how to the rest of the world.

At the same time, Keen is different. Some big data outfits offer massively complicated data analysis tools that run across hundreds of servers and require hard-core engineering talent. Others provide polished iPad apps that let you analyze data in simpler, and less powerful, ways. Keen aims to find a sweet spot, offering tools that are both simple and malleable, tools that let you readily use massive amounts of data in precisely the way you want to use it.

“We’re an alternative to building your own software,” says Kyle Wild, Keen’s CEO, who founded the company alongside Spraetz and another engineer named Daniel Kador, two close friends from his days at IMSA, a live-in Illinois high school known for breeding tech talent.

The trio launched the company out of Wild’s San Francisco home, but as the operation has grown — recently attracting $2.35 million in venture funding — Keen has moved into a communal startup space in the city’s South of Market district. Run by an operation called Heavybit Industries, this space is solely for startups that sell tools to the world’s software developers. It aims to help these startups create a new kind of software infrastructure that makes it all the easier for developers and businesses to build exactly what they want to build. Keen is the poster child of this new movement.

The Origin Story

Keen can trace its roots to Wild’s time as an engineer at the online games company FableLabs. One day, the data analysis guy left for another venture, and the data duties fell to Wild. He spent a few months building a central engine that let the company readily crunch all sorts of data, as opposed to analyzing data in ad hoc ways each time a question came up.

As Wild tells it, this immediately boosted the efficiency of its gaming service. In order to use the site, gamers were required to take an online tutorial, and with his new data analysis engine, Wild says, the company soon determined that the length of the tutorial could directly affect its bottom line. If the tutorial was expanded, fewer people would actually make it onto the site, he explains, but they would end up spending more money. “That’s something you can only learn with really deep analytics,” Wild says. “It’s stuff like that let us go toe-to-toe with Zynga using only a few people.”

The tool was so effective, Wild eventually quit his job to found Keen. Basically, Keen offers a set of application programming interfaces, or APIs, that let you build your own data analysis tools. You shuttle all your data onto Keen’s online service and then, through simple API calls to the service, your software can query that data, slicing and dicing it as needed.

That may sound complicated, but this is a tool for coders, not ordinary business folk. The aim is to keep things simple while still giving coders the flexibility to harness data as they see fit. “You can ask us questions with easy-to-understand, easy-to-construct, logical queries, and we’ll take care of all the hard stuff, like storing data on our servers, scaling the system up, making queries fast,” Kador explains. And, yes, coders can build slick “dashboards” that deliver results to the ordinary folk.

Building Blocks for the Future

You’ll hear a similar pitch from Google, which offers a data analysis service called Big Query, and Amazon, which offers something called Red Shift, but Keen wants to give you more control of your data. Edward Dowling, who runs a small startup called App.io that plugs into Keen for data analysis, says he was drawn to the tool because it could deal with millions upon millions of events at any given moment, but also because it could conform with his own way of doing things. “Other services follow their own forms and paradigms,” he says. “Keen does not.”

‘Other services follow their own forms and paradigms. Keen does not.’

— Edward Dowling

The larger point here is that App.io can analyze data in its own way without building a new engine from scratch. This is another trend across the web, one Heavybit is trying to harness with its communal startup space, one in which companies offer you internet services for piecing together your own online business. In technical speak, these services are APIs, but you can think of them as building blocks. Rather than erecting an entire online business from scratch, you can assemble the basic infrastructure from existing services. Amazon’s cloud provides the processing power. Keen analyzes the data. Imgix processes the images. Twilio offers the voice and text communications. And so on.

“You should only be building the part of a website that’s your core competency,” says Kador. “You should be outsourcing as much as you can.”

Five or six years ago, if you pieced together a new service with various APIs, you called it a mashup. Today, this is simply what you do when creating an online startup. And the practice will only become more prevalent in the years to come. Though it keeps one foot firmly planted in the 16th century, Keen is the future in more ways than one.


YumHacker


Comments:"YumHacker"

URL:http://blog.yumhacker.com/post/74733516768/yumhacker-i-built-a-social-network-for-food-and-heres


tl;dr: After I finished the 180 Websites project I wanted to build a full scale, dynamic website. I saw a need for more personalized restaurant recommendations and spent the last 80 days making YumHacker. The backend uses Rails as an API and the frontend is powered by Backbone.js. You can check out the code on GitHub.

I’ve always been fascinated by the internet’s ability to facilitate communication. When I set out to make 180 websites in 180 days, my goal was to learn to code so I could take part in creating those communication channels by making interactive websites. Toward the end of the 180 websites project I succeeded in making a few websites where people could collaboratively draw pictures, share photos, or chat.

Making these basic interactive websites felt awesome, but I wanted to do something bigger. I wanted to build a full scale, dynamic website where people have a virtual identity that they can be proud of.

Many of my friends really enjoy food — exotic, off the beaten path, authentic, soulful food.  But finding great, new restaurants to try is really hard.  When I head to the internet, I invariably end up trying a place that scores highly on Yelp.  Most of the time the food is, well, meh.  I’m rarely blown away.

For me, most of the awesome new restaurants I discover are recommendations from friends. I created YumHacker to help you to find great restaurants from people you trust, your friends.

There are lots of online tools to help you find new restaurants.  Most food websites are based around the reviews and recommendations of strangers. This is how Yelp operates.  Yelp is pretty amazing because they have all the data: pictures, hours, locations, reviews and star ratings.  Having all of the data is very valuable because it empowers you to make an informed decision. But at the same time, looking at 100 reviews made by strangers requires me to do a lot of work to figure out if the restaurant is a good fit for me.

Foursquare has a really nice balance between tons of data but with a personal touch. They even have the ability to find restaurants near you where your Foursquare friends have checked-in.  If your friends go to a restaurant then the chance of you wanting to go there too goes way up.  Unfortunately, it can be difficult to tell whether your friends love, hate or are indifferent toward a place they’ve checked-in to.

In real life, when someone gives you a recommendation, you get the name of the place and short comment on why they like it.  YumHacker mirrors those real life recommendations online so you can get them any time you want. YumHacker is a place where you can find great restaurants endorsed by people you trust.

YumHacker keeps it simple. Endorse restaurants you like and follow people you trust to see what restaurants they endorse. A helpful map shows you all of the places you’re endorsing, and it’s also possible to have the map display only endorsed restaurants from your friends. Geolocation features make it easy to find new places nearby. Short, 100 character comments put the focus on what matters and leave no room for bloat. You can upload photos, too, if that’s your jam.

After thinking about the structure for YumHacker, I knew I wanted it to be snappy. I consulted with some friends and decided to make YumHacker a one page app using Rails as an API and Backbone on the frontend. On the backend, Rails responds to all initial requests with the javascript assets and a barebones html document so Backbone has somewhere to work its magic. The Rails API then manages all subsequent requests, responding with JSON. For the restaurant search feature, YumHacker taps into the power of the Google Places API to find new restaurant information like price point and hours. For the database, YumHacker uses Postgres with PostGIS. For authentication, YumHacker uses Devise with Omniauth Facebook and Twitter integration.

My first experiences with Backbone were frustrating because Backbone nudges you in the right direction and that wasn’t the direction I was initially headed in. But after I got into the swing of things it was awesome! Having a way to organize and model data made building complex view structures a breeze. If you’re curious, all of the code for YumHacker is publicly available on GitHub so check it out!

This is the first public release of YumHacker and there are still lots of things to do. I’m going to add more advanced filters so you can refine your search by price, category and neighborhood. Coding a mobile optimized front end is also at the top of my priority list. I’m also thinking about adding the ability to make custom lists like ‘My Favorite Burritos’ that you can share. Of course, I’d love to hear any feedback or suggestions for features you’d like to see on YumHacker!

Medium, Evan Williams’s Post-Twitter Media Startup, Raises $25 Million Round | Re/code


Comments:"Medium, Evan Williams’s Post-Twitter Media Startup, Raises $25 Million Round | Re/code"

URL:http://recode.net/2014/01/28/medium-evan-williams-post-twitter-media-startup-raises-25-million-round/?utm_source=appnet


Medium is about to become large.

The collaborative publishing startup has just closed a $25 million round of financing, the company confirmed, marking its first major funding raise since it launched a little more than a year ago.

Among the multiple parties involved in the round are Google Ventures (courtesy of general partner Kevin Rose), SV Angel’s Ron Conway and a number of other investors, such as Chris Sacca and Peter Chernin.

And more: Tim O’Reilly, Michael Ovitz, Gary Vaynerchuk, Betaworks, Code Advisors, CAA Ventures and Science.

The single largest contribution to the round, however, comes from Greylock Partners, the Silicon Valley firm which has previously invested in companies like Facebook, LinkedIn and Tumblr. So large, in fact, that general partners David Sze and Josh Elman will join CEO Evan Williams on Medium’s board.

This also marks the first time that Medium — a blogging product that sits somewhere in between Williams’s previous companies Twitter and Blogger — has taken capital from outside venture firms since its founding.

Previous to this raise, essentially all of the money invested in Medium was from Williams himself via his incubator and investment vehicle, The Obvious Corporation.

That made a certain amount of sense. Williams, of course, is a co-founder and former CEO of Twitter, and officially became a billionaire when the microblogging service went public last year.

Which raises the question: Why is Williams taking outside capital at all?

Williams, in an interview earlier this week, cited a few reasons: As Medium scales, taking money from multiple investors is a signal of long-term thinking and diversification to the company’s employees; and the more parties that have a stake in Medium outside of Williams, the more they have a stake in the company’s success.

Williams also specifically picked Sze and Elman for board seats for different reasons. Sze has a good investment track record, having sat as an observer on Facebook’s board of directors, and he is a current director on LinkedIn’s board. Elman and Williams go back to their days working together at Twitter, where Elman was a product manager on the company’s growth team.

And lastly, Williams can tap into the networks that outside investors bring with them — often something he doesn’t have time for while working on product and running his company full time.

Medium has gone through its fair share of criticism in its first year. Though Williams has long said the company is a “new place on the Internet where people share ideas and stories that are longer than 140 characters and not just for friends,” outsiders still haven’t been able to grasp just exactly what that means. Moreover, due to its early invite-only status and high-minded design principles, many viewed it as a space for high-quality content only.

Williams maintains that’s not true. He’s said he has taken a come-one, come-all approach to content on the platform, not barring any one particular form of content over another. “We welcome all the so-called ‘crap’ as well as the longer essays. But Medium isn’t just a long-form platform,” he said in an interview.

Ideally, Williams envisions Medium much like a magazine creative director, inviting the types of items that may show up in a magazine, from features to top-ten lists to cartoons to even video. Williams has also taken to hiring editors for different sections of Medium, though a Medium “editor” isn’t like a traditional magazine editor; section editors essentially work in talent discovery and story development, finding talented writers, inviting them to the platform and working on ideas with them to fully flesh out their stories.

Medium pays some of its writers, but more to spur the network’s creativity and invite others — who aren’t necessarily professional writers — to use the platform as an arena for self expression.

The biggest change over the past few months is the departure of colleagues Biz Stone and Jason Goldman, both of whom worked with Williams for years at Twitter and, before that, Blogger. Stone and Goldman are no longer involved in the day-to-day aspects of Medium, said Williams, and haven’t been for some time as they have focused on their own startups.

Williams said this is amicable. While the three founded and built Medium inside its Obvious Corporation incubator, Stone and Goldman have also founded their own pet projects over time. Stone, of course, recently launched the mobile Q&A app Jelly, while Goldman has been involved in development at Branch, a startup that recently sold to Facebook.

Williams said the company plans to use its new capital on general expansion, including a move to a larger space on San Francisco’s Market Street, as well as continued hiring and infrastructure scaling.

(Re/code also did a long interview with Williams that is forthcoming.)


Untitled


Comments:"Untitled"

URL:http://www.scribd.com/vacuum?url=http://safehammad.com/downloads/python-idioms-2014-01-16.pdf




Message from Mexico: U.S. Is Polluting Water It May Someday Need to Drink - ProPublica


Comments:"Message from Mexico: U.S. Is Polluting Water It May Someday Need to Drink - ProPublica"

URL:http://www.propublica.org/article/message-from-mexico-u.s.-is-polluting-water-it-may-someday-need-to-drink


Mexico City is planning to draw drinking water from a mile-deep aquifer, challenging U.S. policy that water far underground can be intentionally polluted because it will never be used.

Mexico City's mayor and general director of the country's National Water Commission watch as a geologist takes a drink of water from an exploratory well into an aquifer underneath Mexico City, on Jan. 23, 2013. (Dario Lopez-Mills/AP Photo)

Mexico City plans to draw drinking water from a mile-deep aquifer, according to a report in the Los Angeles Times. The Mexican effort challenges a key tenet of U.S. clean water policy: that water far underground can be intentionally polluted because it will never be used.

U.S. environmental regulators have long assumed that reservoirs located thousands of feet underground will be too expensive to tap. So even as population increases, temperatures rise, and traditional water supplies dry up, American scientists and policy-makers often exempt these deep aquifers from clean water protections and allow energy and mining companies to inject pollutants directly into them.

As ProPublica has reported in an ongoing investigation about America's management of its underground water, the U.S. Environmental Protection Agency has issued more than 1,500 permits for companies to pollute such aquifers in some of the driest regions. Frequently, the reason was that the water lies too deep to be worth protecting.

But Mexico City's plans to tap its newly discovered aquifer suggest that America is poisoning wells it might need in the future.

Indeed, by the standard often applied in the U.S., American regulators could have allowed companies to pump pollutants into the aquifer beneath Mexico City.

For example, in eastern Wyoming, an analysis showed that it would cost half a million dollars to construct a water well into deep, but high-quality aquifer reserves. That, plus an untested assumption that all the deep layers below it could only contain poor-quality water, led regulators to allow a uranium mine to inject more than 200,000 gallons of toxic and radioactive waste every day into the underground reservoirs.

But south of the border, worsening water shortages have forced authorities to look ever deeper for drinking water.

Today in Mexico City, the world's third-largest metropolis, the depletion of shallow reservoirs is causing the ground to sink in, iconic buildings to teeter, and underground infrastructure to crumble. The discovery of the previously unmapped deep reservoir could mean that water won't have to be rationed or piped into Mexico City from hundreds of miles away.

According to the Times report, Mexican authorities have already drilled an exploratory well into the aquifer and are working to determine the exact size of the reservoir. They are prepared to spend as much as $40 million to pump and treat the deeper water, which they say could supply some of Mexico City's 20 million people for as long as a century.

Scientists point to what's happening in Mexico City as a harbinger of a world in which people will pay more and dig deeper to tap reserves of the one natural resource human beings simply cannot survive without.

"Around the world people are increasingly doing things that 50 years ago nobody would have said they'd do," said Mike Wireman, a hydrogeologist with the EPA who also works with the World Bank on global water supply issues.

Wireman points to new research in Europe finding water reservoirs several miles beneath the surface — far deeper than even the aquifer beneath Mexico City — and says U.S. policy has been slow to adapt to this new understanding.

"Depth in and of itself does not guarantee anything — it does not guarantee you won't use it in the future, and it does not guarantee that that it is not" a source of drinking water, he said.

If Mexico City's search for water seems extreme, it is not unusual. In aquifers Denver relies on, drinking water levels have dropped more than 300 feet. Texas rationed some water use last summer in the midst of a record-breaking drought. And Nevada — realizing that the water levels in one of the nation's largest reservoirs may soon drop below the intake pipes — is building a drain hole to sap every last drop from the bottom.

"Water is limited, so they are really hustling to find other types of water," said Mark Williams, a hydrologist at the University of Colorado at Boulder. "It's kind of a grim future, there's no two ways about it."

In a parched world, Mexico City is sending a message: Deep, unknown potential sources of drinking water matter, and the U.S. pollutes them at its peril.


The (Sad) State of Mobile XMPP in 2014


Comments:"The (Sad) State of Mobile XMPP in 2014"

URL:http://op-co.de/blog/posts/mobile_xmpp_in_2014/


In a post from 2009 I described why XEP-0198: Stream Management is very important for mobile XMPP clients and which client and server applications support it. I have updated the post over the years with links to bug tracker issues and release notes to keep track of the (still rather sad) state of affairs. Short summary:

Servers supporting XEP-0198 with stream resumption: Prosody IM.

Clients supporting XEP-0198 with stream resumption: Gajim, yaxim.

Today, with the release of yaxim 0.8.7, the first mobile client actually supporting the specification is available! With yax.im there is also a public XMPP server (based on Prosody) specifically configured to easily integrate with yaxim.

Now is a good moment to recapitulate what we can get from this combination, and where the (mobile) XMPP community should be heading next.

So I have XEP-0198, am I good now?

Unfortunately, it is still not that easy. With XEP-0198, you can resume the previous session within some minutes after losing the TCP connection. While you are gone, the server will continue to display you as "online" to your contacts, because the session resumption is transparent to all parties.

However, if you have been gone for too long, it is better to inform your contacts about your absence by showing you as "offline". This is accomplished by destroying your session, making a later resumption impossible. It is a matter of server configuration how much time passes until that happens, and it is an important configuration trade-off. The longer you appear as "online" while actually being gone, the more frustration might accumulate in your buddy about your lack of reaction – on the other hand, if the session is terminated too early and your client reconnects right after that, all the state is gone!

Now what exactly happens to messages sent to you when the server destroys the session? In prosody, all messages pending since you disconnected are destroyed and error responses are sent back. This is perfectly legal as of XEP-0198, but a better solution would be to store them offline for later transmission.

However, offline storage is only useful if you are not connected with a different client at the same time. If you are, should the server redirect the messages to the other client? What if it already got them by means of carbon copies? How is your now-offline mobile client going to see that it missed something?

Even though XEP-0198 is a great step towards mobile messaging reliability, additional mechanisms need to be implemented to make XMPP really ready for mass-market usage (and users).

Entering Coverage Gaps

With XEP-0280: Message Carbons, all messages you send and receive on your desktop are automatically also copied to your mobile client, if it is online at that time. If you have a client like yaxim, that tries to stay online all the time and uses XEP-0198 to resume as fast as possible (on a typical 3G/WiFi change, this takes less than five seconds), you can have a completely synchronized message log on desktop and mobile.

However, if your smartphone is out of coverage for more than some minutes, the XEP-0198 session is destroyed, no message carbons are sent, and further messages are redirected to your desktop instead. When the mobile client finally reconnects, all it receives is suspicious silence.

XMPP was not designed for modern-day smartphone-based instant messaging. However, it is the best tool we have to counter the proprietary silo-based IM contenders like WhatsApp, Facebook Chat or Google Hangouts.

Therefore, we need to seek ways to provide the same (or a better) level of usability, without sacrificing the benefits of federation and open standards.

Message Synchronization

With XEP-0136: Message Archiving there is an arcane, properly over-engineered draft standard to allow a client to fetch collections of chat messages using a kind of version control system.

An easier, more modern approach is presented in XEP-0313: Message Archive Management (MAM). With MAM, it is much easier to synchronize the message log between a client and a server, as the server extends all messages sent to the client with an <archived> tag and an ID. Later it is easily possible to obtain all messages that arrived since then by issuing a query with the last known archive ID.
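To give a feel for what such a catch-up query looks like, here is a rough sketch that builds one with Python's standard library. The namespace ('urn:xmpp:mam:tmp') and the RSM paging element reflect my reading of the draft as it stood in early 2014 and should be treated as assumptions, not as a normative example.

# Build a MAM catch-up query asking for everything after a known archive ID.
import xml.etree.ElementTree as ET

def mam_catch_up_query(last_archive_id):
    iq = ET.Element("iq", attrib={"type": "set", "id": "mam1"})
    query = ET.SubElement(iq, "{urn:xmpp:mam:tmp}query", attrib={"queryid": "q1"})
    rsm = ET.SubElement(query, "{http://jabber.org/protocol/rsm}set")
    after = ET.SubElement(rsm, "{http://jabber.org/protocol/rsm}after")
    after.text = last_archive_id  # the server returns messages newer than this ID
    return ET.tostring(iq, encoding="unicode")

print(mam_catch_up_query("28482-98726-73623"))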

Now it is up to the client implementors to add support for MAM! So far, it has been implemented in the web-based OTalk client; more will probably come.

End-to-End Encryption

In the light of last year's revelations, it should be clear to everybody that end-to-end encryption is an essential part of any modern IM suite. Unfortunately, XMPP is not there yet. The XMPP Ubiquitous Encryption Manifesto is a step into the right direction, enforcing encryption of client-to-server connections as well as server-to-server connections. However, more needs to be done to protect against malicious server operators and sniffing of direct client-to-client transmissions.

There is Off-The-Record Messaging (OTR), which provides strong encryption for chat conversations, and at the same time ensures (cryptographic) deniability. Unfortunately, cryptographic deniability provably does not save your ass. The only conclusion from that debacle can be: do not save any logs. This imposes a strong conflict of interest on Android, where the doctrine is: save everything to SQLite in case the OOM killer comes after you.

The other issue with OTR over XMPP (which some claim is solved in protocol version 3) is managing multiple (parallel) logins. OTR needs to keep the state of a conversation (encryption keys, initialization vectors and the like). If your chat buddy suddenly changes from a mobile device to the PC, the OTR state machine is confused, because that PC does not know the latest state. The result is, your conversation degrades into a bidirectional flood of "Can't decrypt this" error messages. This can be solved by storing the OTR state per resource (a resource is the unique identifier for each client you use with your account). This fix must be incorporated into all clients, and such things tend to take time. Ask me about adding OTR to yaxim next year.

Oh, by the way. OTR also does not mix well with archiving or carbons!

There is of course also PGP, which also provides end-to-end encryption, but requires you to store your key on a mobile device (or have a separate key for it). PGP can be combined with all kinds of archiving/copying mechanisms, and you could even store the encrypted messages on your device, requiring an unlock whenever you open the conversation. But PGP is rather heavy-weight, and there is no easy key exchange mechanism (OTR excels here with the Socialist Millionaire approach).

And then there are lolcats. The Internet was made for them. But the XMPP community kind-of missed the trend. There is XEP-0096: File Transfer and XEP-0166: Jingle to negotiate a data transmission between two clients. Both protocols allow you to negotiate in-band or proxy-based data transfers without encryption. "In-band" means that your multimedia file is split into handy chunks of at most 64 kilobytes each, base64-encoded, and sent via your server (and your buddy's server), causing some significant processing overhead and possibly triggering rate limiting on the server. However, if you trust your server administrator(s), this is the most secure way to transmit a file in a standards-compliant way.

You could use PGP to manually encrypt the file, send it using one of the mentioned mechanisms, and let your buddy manually decrypt the file. Besides the usability implications (nobody will use this!), it is a great and secure approach.

But usability is a killer, and so of course there are some proposals for encrypted end-to-end communication.

WebRTC

The browser developers did it right with WebRTC. You can have an end-to-end encrypted video conference between two friggin' browsers! This must have rung some bells, and JSON is cool, so there was a proposal to stack JSON on top of XMPP for end-to-end encryption. Obviously because security is not complicated enough on its own.

XMPP Extensions Graveyard

Then there are ESessions, a deferred XEP from 2007, and Jingle-XTLS, which didn't even make it into an XEP, but looks promising otherwise. Maybe somebody should implement it, just to see if it works.

Custom Extensions

In the OTR specification v3, there is an additional mechanism to exchange a key for symmetric data encryption. This can be used to encrypt a file transmission or stream, in a non-standard way.

This is leveraged by CryptoCat, which is known for its security. CryptoCat is splitting the file into chunks of 64511 bytes (I am sure this is completely reasonable for an algorithm working on 16-byte blocks, so it needs to be applied 4031.9375 times), with the intention to fit them into 64KB transmission units for in-band transmission. AES256 is used in CTR mode and the transmissions are secured by HMAC-SHA512.

In ChatSecure, the OTR key exchange is leveraged even further, stacking HTTP on top of OTR on top of XMPP messages (on top of TLS on top of TCP). This might allow for fast results and a high degree of (library) code reuse, but it makes the protocol hard to understand, and in-the-wild debugging even harder.

A completely different path is taken by Jitsi, where Jingle VoIP sessions are protected using the Zimmermann RTP encryption scheme. Unfortunately, this mechanism does not transfer to file exchange.

And then iOS...

All the above only works on devices where you can keep a permanent connection to an XMPP server. Unfortunately, there is a huge walled garden full of devices that fail this simple task. On Apple iOS, background connections are killed after a short time, and the app developer is "encouraged" to use Apple's Push Service instead to notify the user of incoming chat messages.

This feature is so bizarre, you can not even count on the OS to launch your app if a "ping" message is received, you need to send all the content you want displayed in the user notification as part of the push payload. That means that as an iOS IM app author you have the choice between sacrificing privacy (clear-text chat messages sent to Apple's "cloud") or usability (display the user an opaque message in the kind of "Somebody sent you a message with some content, tap here to open the chat app to learn more").

And to add insult to injury, this mechanism is inherently incompatible with XMPP. If you write an XMPP client, your users should have the free choice of servers. However, as a client developer you need to centrally register your app and your own server(s) for Apple's push service to work.

Therefore, the iOS XMPP clients divide into two groups. In the first group there are apps that do not use Apple Push, that maintain your privacy but silently close the connection if the phone screen is turned off or another app is opened.

In the second group, there are apps that use their own custom proxy server, to which they forward your XMPP credentials (yes, your user name and password! They better have good privacy ToS). That proxy server then connects to your XMPP server and forwards all incoming and outgoing messages between your server and the app. If the app is killed by the OS, the proxy sends notifications via Apple Push, ensuring transparent functionality. Unfortunately, your privacy falls by the wayside, leaving a trail of data both with the proxy operators and Apple.

So currently, iOS users wishing for XMPP have the choice between broken security and broken usability – well done, Apple! Fortunately, there is light at the end of the tunnel. The oncoming train is an XEP proposal for Push Notifications (slides with explanation). It aims at separating the XMPP client, server, and push service tasks. The goal is to allow an XMPP client developer to provide their own push service, which the client app can register with any XMPP server. After the client app is killed, the XMPP server will inform the push service about a new message, which in turn informs Apple's (or any other OS vendor's) cloud, which in turn sends a push message to the device, which the user then can use to re-start the app.

This chain reaction is not perfect, and it does not solve the message-content privacy issue inherent to cloud notifications, but it would be a significant step forward. Let us hope it will be specified and implemented soon!

Summary

So we have solved connection stability (except on iOS). We know how to tackle synchronization of the message backlogs between mobile and desktop clients. Client connections are encrypted using TLS in almost all cases, server-to-server connections will follow soon (GMail, I am looking at you!).

End-to-end encryption of individual messages is well-handled by OTR, once all clients switch to storing the encryption state per resource. Group chats are out of luck currently.

The next big thing is to create an XMPP standard extension for end-to-end encryption of streaming data (files and real-time), to properly evaluate its security properties, and to implement it into one, two and all the other clients. Ideally, this should also cover group chats and group file sharing (e.g. on top of XEP-0214: File Repository and Sharing plus XEP-0329: File Information Sharing).

If we can manage that, we can also convince all the users of WhatsApp, Facebook and Google Hangouts to switch to an open protocol that is ready for the challenges of 2014.

Comments on HN


code golf - Print every character your program doesn't have - Programming Puzzles & Code Golf Stack Exchange


Comments:"code golf - Print every character your program doesn't have - Programming Puzzles & Code Golf Stack Exchange"

URL:http://codegolf.stackexchange.com/questions/12368/print-every-character-your-program-doesnt-have


PowerShell: 96

Must be saved and run as a script.

diff([char[]](gc $MyInvocation.InvocationName))([char[]](32..126))-Pa|?{$_.SideIndicator-eq'=>'}

diff is a built-in alias for Compare-Object.

gc is a built-in alias for Get-Content.

$MyInvocation.InvocationName gets the full path to the script being executed.

32..126 is the decimal equivalent for 0x20..0x7e, and so creates an array of the decimal ASCII codes we're looking for.

[char[]] takes the contents of the next object and puts them into an array, breaking them up and converting them into ASCII characters. So, we now have two arrays of ASCII characters - one pulled from this script, the other defined by the challenge criteria.

-Pa sets Compare-Object to "Passthru" format, so only the items which are found different between the inputs are output at the console - indicators of which items were in which input are still stored in the object's data, but are not displayed.

|?{$_.SideIndicator-eq'=>'} pipes Compare-Object's output to Where-Object, which filters it down to only the items which are exclusive to the second input.



Technology and wealth inequality - Sam Altman


Comments:" Technology and wealth inequality - Sam Altman "

URL:http://blog.samaltman.com/technology-and-wealth-inequality?curator=MediaREDEF


Thanks to technology, people can create more wealth now than ever before, and in twenty years they’ll be able to create more wealth than they can today.  Even though this leads to more total wealth, it skews it toward fewer people.  This disparity has probably been growing since the beginning of technology, in the broadest sense of the word.

Technology makes wealth inequality worse by giving people leverage and compounding differences in ability and amount of work.  It also often replaces human jobs with machines.  A long time ago, differences in ability and work ethic had a linear effect on wealth; now it’s exponential. [1] Technology leads to increasing wealth inequality for lots of other reasons, too—for example, it makes it much easier to reach large audiences all at once, and a great product can be sold immediately worldwide instead of in just one area.

Without intervention, technology will probably lead to an untenable disparity—so we probably need some amount of intervention.  Technology also increases the total wealth in a way that mostly benefits everyone, but at some point the disparity just feels so unfair it doesn’t matter.

Wealth inequality today in the United States is extreme and growing, and we talk about it a lot when someone throws a brick through the window of a Google bus.  Lots of smart people have already written about this, but here are two images to quickly show what the skew looks like: 


[0]

As the following table shows, wealth inequality has been growing in America for some time, not just the last few years.  It’s noticeable between the top 20% and bottom 80%, and particularly noticeable between the top 1% and bottom 99%.

And here is a graph that shows the income share of the top 1% over time:

The best thing one can probably say about this widening inequality is that it means we are making technological progress—if it were not happening, something would be going wrong with innovation.  But it’s a problem for obvious reasons (and the traditional endings to extreme wealth inequality in a society are never good).

We are becoming a nation of haves and have-nots—of prosperous San Francisco vs. bankrupt Detroit.  In San Francisco, the average house costs around $1mm.  In Detroit, the average house costs less than a Chevy Malibu made there. [2] And yet, I’d view a $1mm house in San Francisco as a better investment than 20 $50k houses in Detroit.  As the relentless march of technology continues, whole classes of jobs lost are never coming back, and cities dependent on those lost jobs are in bad shape. [3]

This widening wealth divide is happening at all levels—people, companies, and countries.  And either it will keep going, or innovation will stop.

But it feels really unfair. People seem to be more sensitive to relative economic status than absolute. So even if people are much better off being poor today than a king 500 years ago, most people compare themselves to the richest people today, and not the richest people from the past.

And importantly, it really is unfair.  Trying to live on minimum wage in the United States is atrocious (http://www.forbes.com/sites/laurashin/2013/07/18/why-mcdonalds-employee-budget-has-everyone-up-in-arms/).  That budget, incidentally, assumes that the worker is working two jobs.  Even though they’re outputting less value, that person is certainly working harder than I am.  We should do more to help people like this.

Real minimum wage has declined, failing to track real average wages and massively failing to track the wages of the top 1%.

In a world where ideas and networks are what matter, and manufacturing costs trend towards zero, we are going to have to get comfortable with a smaller and smaller number of people creating more and more of the wealth.   And we need a new solution for the people not creating most of the wealth—many of the minimum wage jobs are going to get innovated away anyway.

There are no obvious/easy solutions, or this would all be resolved.  I don’t have any great answers, so I’ll just throw out some thoughts.

We should assume that computers will replace effectively all manufacturing, and also most “rote work” of any kind. So we have to figure out what humans are better at than computers. If really great AI comes along, all bets are off, but at least for now, humans still have the market cornered on new ideas. In an ideal world, we’d divide labor among humans and computers so that we can both focus on what we’re good at.

There is reason to be optimistic.   When the steam engine came along, a lot of people lost their manual labor jobs.  But they found other things to do.  And when factories came along, the picture looked much worse.   And yet, again, we found new kinds of jobs.  This time around, we may see lots more programmers and startups.

Better education—in the right areas—is probably the best way to solve this.  I am skeptical of many current education startups, but I do believe this is a solvable problem.  A rapid change in what and how we teach people is critical—if everything is changing, we cannot keep the same model for education and expect it to continue to work.  If large classes of jobs get eliminated, hopefully we can teach people new skills and encourage them to do new things. 

Education, unlike a lot of other government spending, is actually an investment—we ought to get an ROI on it in terms of increased GDP (but of course it takes a long time to pay back). 

However, if we cannot find a new kind of work for billions of people, we’ll be faced with a new idle class.  The obvious conclusion is that the government will just have to give these people money, and there’s been increasing talk about a “basic income”—i.e., any adult who wanted it could have, say, $15,000 a year.

You can run the numbers in a way that sort of makes sense—if we did this for every adult in the US, it’d be about $3.5 trillion a year, or a little more than 20% of our GDP.  However, we’d knock out a lot of existing entitlement spending, maybe 10% of GDP.  And we’d probably phase it out for people making over a certain threshold, which could cut it substantially. 
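A quick back-of-the-envelope check of those numbers, sketched in Python; the adult-population and GDP figures below are assumptions (roughly the 2013 US values), not numbers taken from the text:

```python
# Rough check of the basic-income arithmetic above.
# The adult count and GDP are assumed values, not figures from the essay.
adults = 235e6          # assumed number of US adults (~235 million)
basic_income = 15_000   # dollars per adult per year, from the text
gdp = 16.8e12           # assumed US GDP, roughly $16.8 trillion (2013)

total_cost = adults * basic_income
print(f"total cost: ${total_cost / 1e12:.2f} trillion per year")   # ~$3.5 trillion
print(f"share of GDP: {100 * total_cost / gdp:.0f}%")              # a little over 20%
```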

There are benefits to this—we’d end up helping truly poor people more and middle class people less, and we’d presumably cut a ton of government bureaucracy.  We could perhaps end poverty overnight (although, no doubt, anything like this would cause prices to rise).  And likely most of this money would be spent, providing some boost to the economy.  We could require 10 hours a week of work for the government, or not.  A big problem with this strategy is that I don’t think it’ll do much to address the feeling of inequality.

Many people have a visceral dislike of the idea of giving away money (though I think some redistribution of wealth is required to reasonably equalize opportunity), and certainly the default worry is that people would just sit around and waste time on the Internet.  But maybe, if everyone knew they had a safety net, we’d get more startups, or more new research, or more novels.  Even if only a small percentage of people were productive, in a world where some people create 10,000x more value than others, that’d be ok.  The main point I’m trying to make is that we’re likely going to have to do something new and uncomfortable, and we should be open to any new ideas.

But this still doesn’t address the fundamental issue—I believe most people want to be productive.  And I think figuring out a much better way to teach a lot more people about technology is likely the best way to make that happen.

Thanks to Nick Sivo for reading a draft of this.

Follow me on Twitter here: http://twitter.com/sama

[0] http://www.youtube.com/watch?v=QPKKQnijnsM

[1] There are lots of other significant factors that cause wealth inequality—for example, having money makes it easier to make more money—but technology is an important and often-overlooked piece

[2] http://www.huffingtonpost.com/2012/07/20/home-cost_n_1690109.html

[3] I was recently in Detroit and was curious to see some of the neighborhoods where you can buy houses for $10-20k.  Here are some pictures:

 

Screw your standing desk, how about squatting?


Comments:"Screw your standing desk, how about squatting?"

URL:http://bitehype.com/screw-your-standing-desk-how-about-squatting/


When the standing desk movement first came along, I was pretty much in love. Why?

Standing at work, sometimes even walking (treadmill desks anyone?) during work – where can I sign up? But I think standing all the time isn’t optimal either and we should aim for sitting less, not standing more.

Anyone who has stood for more than 5 hours (heck, beginners can’t even stand for 20 minutes!) knows how problematic and tiring this can be. And when you’re in pain, your performance will suffer. Also, don’t forget to factor in all the posture, knee, and back problems a lot of people have; just standing around won’t fix them.

So Feyyaz, what are you offering instead, you may ask? Simple: I’m offering you a solution that will not only help you work better, improve your flexibility, and increase your strength, but will also let you rest at the same time. Wonder pill? Hell yeah!

Let’s squat!

Let me introduce you to this guy:

And this woman:

 

And this working man:

 

This looks like an interesting group:

 

“So she says to me: That hat? Amaze-balls!”

What do these people all have in common? No, not their Asian heritage (what’s with the racism, yo?).

All these people squat to rest! These people are tired of standing, and since they don’t have chairs with them (who needs chairs, anyway?), they squat to talk, wait or chill.

The problem with Western cultures, however, is that we’ve lost this basic position. Most of us can’t squat down because of tight hips, bad knees, or poor ankle mobility. Try it out for yourself: squat down without lifting your heels. Some of you might even be able to do it, but most of us can’t hold it, because it’s super tiring.

“You’re crazy, you’re recommending me something that’s even more tiring than standing!”

Hold ya horses there, buddy. It’ll get better, but only if you work at it.

 

Some sciencez

Let’s face it: most of us don’t use the squatting position at all in our adult lives (except when we face a toilet situation in a third-world country). And you automatically lose the patterns and positions you don’t use. Your body’s adaptability goes both ways, of course: do a move every day and you’ll get better at it; don’t do it, and you’ll lose it.

Your body is a machine, trimmed for efficiency. Your muscles are lazy and your neurons are even lazier, so they look for every possible way to avoid doing work. Muscles atrophy when you’re not using them. Don’t believe me? Have you ever tried not walking for a week? A month? You’d have a hard time even standing, because your body assumed you weren’t going to stand or walk anymore!

What we try to do in fitness, in general, is use this force for us and not against us. That’s why we lift a little heavier, walk a little farther, and run a little faster every time we train (at least, that’s what you should do). And it goes even further than this.

Ido Portal has a concept called “movement complexity”: once you’ve reached a certain skill or move, you can take it and dig deeper. A handstand is a basic move; a one-arm handstand is a more complex version, so the brain (and the body) has to adapt to the complexity. Our bodies crave complexity, but only if you feed it to them.

Which brings me back to the squat. The squat is a basic resting position we should all have in our “toolbox”.

We can use it to:

- squat with weights in order to increase our strength (and in turn use this strength for other fun stuff!)

- as a basic position in floor work (e.g. Locomotion - every middle position is a squat!)

- simply to play with our children!

- rest.

Once you “have” the squat, it’s not tiring anymore. Oh, and it’s way healthier for your digestive system.

 

Without going into the details too much, sitting forces you to strain more to handle your toilet business, opening up a lot of nasty consequences for your GI health. Trust me, you don’t want to fuck around with your GI (I call mine GI Joe for a reason).

 

“But squatting is so difficult and tiring”

I feel ya; this is why I’ll suggest a solution today. I counter the standing desk movement by introducing you to the 30/30 Squat Challenge.

Introduced by the great Ido Portal, the idea is to accumulate 30 minutes of squatting every day for 30 consecutive days. This will fix your squat and open up a new world for you. Start a stopwatch as soon as you squat down and stop it when you stand back up. Get your 30 minutes each day for 30 days, and your squat, your GI health, even your posture will improve!

 

Your squat won’t be perfect; you might not be able to go all the way down or keep a straight back, and that’s OK. The idea is that you regain a basic position of your body and get to be as cute as this girl:

 

 

Work while squatting

The squat is also a great position to work in. Screw chairs and expensive standing desks: get yourself a “LACK” table from IKEA, get working, and be awesome like these folks:

 

(taken from here)

Or this fit gal:

 

Or this strong woman:

 

 

Squatting while working will increase your awesomeness by about 43%*

So what are you waiting for? It’s time to squat, people!

Oh and also squat while waiting for the food:

 

Still here? Not convinced yet? You need the Asian squat video:

 

 

*not confirmed

The Key to Snapchat's Profitability: It's Dirt Cheap to Run | Wired Opinion | Wired.com


Comments:"The Key to Snapchat's Profitability: It's Dirt Cheap to Run | Wired Opinion | Wired.com"

URL:http://www.wired.com/opinion/2014/01/secret-snapchats-monetization-success-will-surprise/


Ever since Snapchat turned down a $3 billion all-cash offer from Facebook this past November, there’s been no shortage of discussion about it and the rest of its photo-sharing-and-messaging service cohort, which includes WhatsApp, Kik, Japan-based LINE, China-based WeChat, and Korea-based Kakao Talk. Explanations for this phenomenon have ranged from the need to redefine identity in the social-mobile era to the rise of ephemeral, disposable media.

Regardless of why this trend is taking off, it’s clear that the so-called messaging “wars” are heating up. As always, the euphoria over hockey-stick user growth numbers soon gives way to the sobriety of analysis, yielding the inevitable question: Can they monetize? Snapchat, with its massive (paper) valuation, is at the vanguard of such criticism, especially given the irony that the service is essentially deleting its biggest asset – its data.

So, how can Snapchat effectively monetize without its user data? By operating its service — and in particular, its infrastructure — an order of magnitude cheaper than its competitors.

Surprisingly little time has been spent examining how one can rethink a storage-centric infrastructure model for this kind of disappearing-data service. Thinking about system architecture isn’t just relevant to engineers; it has important implications for helping services like Snapchat save — and therefore make — money. (By the way, that amount would need to be about $500 million in revenue and $200 million in profit to justify its $3 billion valuation in November.)

It’s very simple: If the appeal of services like Snapchat is in the photos (“the fuel that social networks run on”), then the costs are in operating that photo sharing-and-serving service, as well as running any monetization — such as ads — built on top of that infrastructure.

But I’d go so far as to argue that making use of advanced infrastructure protocols could let Snapchat get away with paying almost no bandwidth costs for a large subset of media. How? Well, let’s begin by comparing Snapchat’s infrastructure to that of a more traditional social network: its erstwhile suitor, Facebook.

Vijay Pandurangan is the founder and CEO of Mitro, a password manager for organizations; he also angel invests in startups. Previously, Pandurangan worked at Google designing and implementing some of the core systems’ infrastructure as well as parts of its ads system. Follow him on Twitter @vijayp.

According to publicly available data, Facebook users upload 350 million images a day. Back when users were adding 220 million photos weekly in 2009, the company was serving upwards of 550,000 images per second at peak — and they did it by storing five copies of each image, downsampled to various levels, in a photo storing-and-serving infrastructure called Haystack. (For obvious reasons, the exact architecture of these systems is not known.)

That gives you a sense of the scope of the infrastructure. But the salient detail here is the total cost of all this serving-and-storage — including all-in per-byte cost of bandwidth — which I estimate to be more than $400 million a year. [If you want the details, here’s what went into my calculation, which also includes ancillary costs such as power, capital for servers, human maintenance, and redundancy. The most important variables in this cost calculation are:

  • the number of images/videos uploaded each month (estimated at ~ 400M photos daily)
  • the size of each image/video (estimated at 3MB)
  • the average number of images/videos served each month (estimated at 9.5% of all images)
  • all-in per-byte bandwidth/serving cost (estimated at $5×10^-11)
  • all-in per-byte storage cost (estimated at $5×10^-11)
  • exponential growth rate coefficient (r, estimated at ~0.076, using P_t = P_0·e^(rt)).

To compare Facebook’s costs to Snapchat’s, however, we also have to include these variables: the mean number of recipients of each Snapchat message (estimated conservatively at 2.5); and the fraction of total messages that are undelivered (estimated at 10 percent).]
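As a rough illustration of how those variables combine, here is a minimal sketch in Python. The photo size, served fraction, and per-byte costs are the estimates listed above; the cumulative photo count (about 250 billion) and the per-month unit on the storage cost are assumptions, and the sketch ignores Haystack's multiple downsampled copies and growth over time, so treat the output as a ballpark. It lands in the same region as the estimate of more than $400 million a year.

```python
# Back-of-the-envelope Facebook photo serving-and-storage cost, using the
# variables listed above. TOTAL_PHOTOS_STORED and the per-month storage
# unit are assumptions; the other constants come from the list.
BYTES_PER_PHOTO = 3e6                # ~3 MB per image
SERVED_FRACTION_PER_MONTH = 0.095    # ~9.5% of all images served each month
COST_PER_BYTE_SERVED = 5e-11         # all-in bandwidth/serving cost, $/byte
COST_PER_BYTE_STORED = 5e-11         # all-in storage cost, assumed $/byte/month
TOTAL_PHOTOS_STORED = 250e9          # assumption: ~250 billion photos accumulated

stored_bytes = TOTAL_PHOTOS_STORED * BYTES_PER_PHOTO
served_bytes_per_month = TOTAL_PHOTOS_STORED * SERVED_FRACTION_PER_MONTH * BYTES_PER_PHOTO

storage_per_year = stored_bytes * COST_PER_BYTE_STORED * 12
serving_per_year = served_bytes_per_month * COST_PER_BYTE_SERVED * 12

print(f"storage: ${storage_per_year / 1e6:.0f}M/year")   # ~$450M
print(f"serving: ${serving_per_year / 1e6:.0f}M/year")   # ~$43M
print(f"total:   ${(storage_per_year + serving_per_year) / 1e6:.0f}M/year")
```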

Obviously, we are comparing a much larger service that has advertising — Facebook — to one that is smaller and doesn’t have any advertising (yet). But this doesn’t really matter in principle, because even though Facebook has to make sure its infrastructure can store and serve the data needed to sell ads, the reality is that much of the information that helps advertisers target users is the metadata of user interactions — with whom, where, how, and when (as well as what they ‘like’) — as opposed to the content of what those users are actually saying.

This means that despite their differences, storing and analyzing only the metadata would still allow Snapchat to build similar profiles of its users as Facebook. Snapchat could thus sell ads that target users just as Facebook does (assuming of course that their product can attract a consistent customer base) — and with one huge advantage: lower costs, since Snapchat doesn’t need to store or serve any messages after they’ve been delivered.
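To make that concrete, here is a hypothetical sketch of the kind of record a metadata-only store might keep after the photo itself has been deleted. None of these field names come from Snapchat; they are simply the who, when, where, and how interaction data the paragraph above describes, which is what ad targeting is typically built on.

```python
# Hypothetical metadata-only record: enough signal for targeting-style
# profiles, with none of the (deleted) photo bytes. Field names are
# illustrative assumptions, not Snapchat's actual schema.
from dataclasses import dataclass
from typing import List

@dataclass
class SnapMetadata:
    sender_id: str
    recipient_ids: List[str]
    sent_at: float              # unix timestamp
    opened_at: List[float]      # per-recipient open times
    coarse_location: str        # e.g. city-level geo, not the photo itself
    device: str                 # platform / app version
    media_kind: str             # "photo" or "video"
    media_bytes: int            # size only; the content is gone after delivery

record = SnapMetadata(
    sender_id="u123",
    recipient_ids=["u456", "u789"],
    sent_at=1_390_000_000.0,
    opened_at=[1_390_000_060.0, 1_390_000_120.0],
    coarse_location="San Francisco, CA",
    device="iOS / 5.0.2",
    media_kind="photo",
    media_bytes=2_800_000,
)
```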

This kind of approach to user targeting, with its metadata-centric infrastructure and associated cost savings — is by no means unique to Snapchat. The public revelations about NSA’s surveillance operations point to a similar architecture; storing the entire content of all intercepted communication would be prohibitive in terms of cost and space, but not so for metadata. In fact, the way the metadata is ostensibly used to target individuals and groups NSA agents deem to be a threat is not dissimilar to how advertising targeting works. But that’s another discussion.

What makes Facebook’s — and any other traditional social network’s — photo-serving costs so expensive is having to keep data in a high-availability, low-latency, redundant, multi-master data store that can withstand temporary spikes in traffic load. But much of this expense is unnecessary for storing and processing metadata. Based on some additional assumptions (such as the number of recipients of each message), we can estimate that, even if its per-byte storage costs were 5x higher, Snapchat would only need to pay $35 million a year (under 9 percent of Facebook’s total estimated infrastructure costs) to handle a similar load — all while accruing a trove of data with similar targeting value.

It’s like getting a mile when you’re only giving an inch.
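A tiny sketch of the ratio behind that estimate. The roughly 500-byte metadata record size is an assumption for illustration, and the 5x storage premium comes from the paragraph above; the numbers make no attempt to reproduce the exact $35 million figure (which also folds in delivery bandwidth and briefly stored undelivered messages). They only show why dropping content after delivery shrinks the long-term storage bill by orders of magnitude.

```python
# Per-message long-term storage: content that is deleted after delivery vs.
# a small metadata record kept in a 5x pricier store. The 500-byte record
# size is an assumption for illustration.
CONTENT_BYTES = 3e6          # ~3 MB photo/video, deleted once delivered
METADATA_BYTES = 500         # assumed size of the retained metadata record
METADATA_COST_PREMIUM = 5    # metadata kept in a 5x more expensive store

content_units = CONTENT_BYTES * 1
metadata_units = METADATA_BYTES * METADATA_COST_PREMIUM

print(f"metadata vs. content bytes-cost ratio: {metadata_units / content_units:.4%}")
# ~0.08% -- even paying 5x per byte, keeping only metadata is a rounding
# error next to storing every photo forever.
```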

So how could Snapchat reduce its bandwidth and storage costs even further? The key, again, is in the seemingly mundane: infrastructure. There are a number of complicated optimizations that could make the system even cheaper to operate. For example, Snapchats between parties that are concurrently online could be delivered via peer-to-peer messaging (think Skype). Because these messages would never even flow over Snapchat’s network, it would reduce Snapchat’s delivery costs to nearly nothing.

It’s not just theoretical. Firewalls are an impediment, of course, but a number of solutions, including proxy servers in the edge of the network, or ICE (RFC 5245) could make the above doable relatively soon. Snapchat could even store encrypted, undelivered messages on other users’ phones, ensuring availability by using erasure coding with sufficient redundancy. This technique basically involves splitting media up into many overlapping pieces (only a few of which are needed to reconstitute the entire picture); giving the data to different users (encrypted so that no one other than the recipient would be able to glean any information from it); and assuming with high probability that enough users will be online at any time to reconstruct the data.
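For readers who haven't run into erasure coding before, here is a toy 2-of-3 scheme in Python: split the blob into two halves and add an XOR parity piece, and any two of the three pieces reconstruct the original. Real systems would use Reed-Solomon style codes with many more shards spread across many phones; this is a sketch of the idea only, not anything Snapchat actually runs.

```python
# Toy 2-of-3 erasure code: two data halves plus an XOR parity piece.
# Losing any one piece (say, a phone goes offline) is recoverable.
def encode(blob: bytes) -> dict:
    half = (len(blob) + 1) // 2
    a = blob[:half]
    b = blob[half:].ljust(half, b"\x00")        # pad so both halves match
    parity = bytes(x ^ y for x, y in zip(a, b))
    return {"a": a, "b": b, "parity": parity, "length": len(blob)}

def decode(pieces: dict) -> bytes:
    a, b, p = pieces.get("a"), pieces.get("b"), pieces.get("parity")
    if a is None:                               # recover first half: b XOR parity
        a = bytes(x ^ y for x, y in zip(b, p))
    if b is None:                               # recover second half: a XOR parity
        b = bytes(x ^ y for x, y in zip(a, p))
    return (a + b)[:pieces["length"]]

snap = b"pretend this is an encrypted, undelivered snap"
pieces = encode(snap)
del pieces["a"]                                 # one holder of a piece disappears
assert decode(pieces) == snap                   # the other two pieces suffice
```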

While it’s hard to guess what fraction of messages are exchanged between parties that are online, the impact of such infrastructure design would definitely be substantial.

The fact is, this new generation of messaging services can use cost-effective infrastructure to operate so much more cheaply than the Facebooks of the world and yet still effectively target ads to users. While it would seem that not storing content would be an obstacle to monetization, that design feature turns out to be an asset when working from metadata. The question that remains isn’t how they’ll monetize; it’s whether these services can make a compelling enough product to keep users coming back for more.

We are not normal people | by @mijustin


Comments:"We are not normal people | by @mijustin"

URL:http://justinjackson.ca/we-are-not-normal-people/?utm_content=buffer495cb&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer


We are not normal people

Written by Justin on January 29, 2014

When it comes to building products, the biggest problem technical (and creative) people have is this:

increasing the technical challenge while creating a product does not increase the chance for more sales

This surprises us. We get an idea for a thing, think about the technology we’d use to build it, and get excited.

“I could build this on the Twilio API!” “I could learn that new CSS framework!” “I could use this new tool I just purchased!”

The problem is that all of this is focused on us, the creator, and not on the customer, the consumer.

Repeat after me:

“We are not normal people.”

Say it again:

“I am not a normal person.”

We’re not. What’s “normal” for us is often alien to our customers. If we’re actually going to sell products, we need to quit thinking about what’s cool to us, and focus on what customers actually need.

Here’s a lesson I learned the hard way: the best way to do this is to listen.

Let me give you an example:

I was walking to my barber for a haircut, thinking about all the ways technology could improve my barber’s business. “Software is eating the world!” I thought. As I walked, I began to create software (in my mind) that would eliminate perceived inefficiencies, save him hundreds of dollars a month, and increase sales exponentially.

Then I went in, got my haircut, and got a reality check.

Me: “So, have you ever tried using scheduling software for your appointments?”

Barber: “Oh man, I’ve tried like 10 of them. Terrible! They’re all terrible.”

Me: “Really? None of them were helpful? Why?”

Barber: “Almost all my bookings happen on the phone, or via text message. There’s nothing I’ve found that’s more efficient than looking at a paper calendar on the wall, and finding them a time. If I have to walk over to the computer, I’ve already wasted too much time. I have 5 seconds to look and determine when I have a spare block. All the software I’ve tried just gets in the way.”

All the plans in my head for incredible barbering software were crushed in a single conversation.

This is the power of getting out and actually listening to people.

Sidebar: there’s a temptation to try to change people’s priorities so they fit our ideal. For example, I could have argued with my barber that a paper calendar is a terribly inefficient way to organize his business. This is almost always a bad idea. First: he knows his business way better than I do. Second: trying to change people’s priorities is almost never profitable. The amount of energy, time and dollars required makes it a losing proposition.

Here’s the hard part about building, and marketing, products: we have to commit ourselves to the best solution for the customer EVEN when it’s not the most challenging thing to build. Here’s a scary thought: in some cases, a customer might not NEED more software!

If we’re really going to help people, and we’re really going to improve their lives, we have to be open to all possible solutions.

  • Sometimes the best solution for a customer will be to write a book.
  • Sometimes, yes, they’ll need good, simple software that solves their problem.
  • And sometimes, like my barber, maybe what they really need is a better paper calendar that helps them book appointments more efficiently.

Really, we won’t know until we listen. If you want to get good at marketing and sales, you’re going to need to get good at really listening. Throw away your preconceived notions, and open your ears to what your target market has to say.

You can do this in direct conversation, like I had with my barber. However, it’s also helpful to go to places where you can be a silent observer.

Luckily, the Internet has lots of places like this, especially if your target audience is online. Go to forums, subreddits, Facebook groups, and Quora and listen to what people talk about.

Here’s what you’re looking for: what are people always complaining about? What pain gets brought up over and over again? (Hat tip to Patio11, Derek Sivers, Hiten Shah and Amy Hoy for teaching this to me originally)

I’ve always hung out with developers. Although I don’t write a lot of code, I like working with them. In my day job as a Product Manager, I partner with them every day. In my spare time, I hang out with them on forums like Hacker News, Slashdot, and JFDI. And in my hometown, some of my best friends are engineers. We go out for beer, have lunch, and play volleyball together.

When I hang out with my developer friends, I ask questions and I listen. Here’s a pattern I started to see: developers have the amazing ability to build things, but they’re intimidated by marketing. It confuses them. They don’t know where to start. Here’s a few quotes I’ve collected:

“Like a lot of programmers, I used to view marketing and sales as something that was scummy and below me. It amounted to essentially tricking people into giving you their money and they didn’t get much in return. It wasn’t until I became a salesman that my view on sales and marketing completely changed.”

“I am an engineer and product developer by trade. However, sales and advertising are much tougher for me. What works? Social media? Google? Bloggers?”

A developer who knows how to code and market a product is basically unstoppable. I want to help my developer friends to be unstoppable: to build, market, and sell their own software.

So, I’ve decided to help. I’ve started writing a new book, tentatively titled Marketing for Developers. If you’d like updates on my progress, and a free teaser chapter, you can sign up here.

Here’s your homework for this week: I want you to go out, and listen. Leave your ideas at the door. Just ask questions, observe, and record the trends that you see.

Cheers,
Justin Jackson
@mijustin

PS: not a developer? (Or are you a developer that wants to get going right now?) My free course on building an email list is a great place to start.

PPS: My wife reminds me that “I am not normal” all the time. Here’s a good example (this was not halloween):

Untitled


Comments:"Untitled"

URL:http://www.scribd.com/vacuum?url=http://www.hscic.gov.uk/media/12443/data-linkage-service-charges-2013-2014-updated/pdf/dles_service_charges__2013_14_V10_050913.pdf


