Channel: Hacker News 50

Dad gets OfficeMax mail addressed 'Daughter Killed in Car Crash' - latimes.com


URL:http://www.latimes.com/nation/nationnow/la-na-nn-officemax-mail-20140119,0,6457094.story#axzz2qubhydAY


An off-and-on customer of OfficeMax, Mike Seay has gotten the office supply company's junk mail for years. But the mail that the grieving Lindenhurst, Ill., father said he got from OfficeMax last week was different.

It was addressed to "Mike Seay, Daughter Killed in Car Crash."

Strange as that sounds, the mail reached the right guy. Seay's daughter Ashley, 17, was killed in a car crash with her boyfriend last year. OfficeMax somehow knew.

And in a world where bits of personal data are mined from customers and silently sold off and shuffled among corporations, Seay appears to be the victim of some marketing gone horribly wrong.

"I’m not a big OfficeMax customer. And I wouldn’t have gone there and said anything to anybody there about it [the car crash]. That’s not their business," Seay, 46, told the Los Angeles Times in a phone interview Sunday.

In a statement, OfficeMax said the mailing "is a result of a mailing list rented through a third-party provider" and offered its apologies to Seay. A spokeswoman told The Times on Sunday that the company was still gathering information about what had happened.

The company, however, had not personally called Seay to apologize, Seay told The Times, and he has been worried about the company's behavior since he and his wife received the letter Thursday.

The letter seems to be some kind of discount offering, Seay said.

Seay said that he called an OfficeMax number Friday and that a manager at a call center refused to believe he'd been sent the letter addressed that way.

Then, he said, a spokeswoman for OfficeMax "acted the same way" shortly before he was interviewed by NBC-5 reporter Nesita Kwan on Friday. (Kwan told The Times she couldn't comment until she received approval from her supervisors.) The spokeswoman was more conciliatory after she received a photo of the envelope, Seay said.

Seay, who is unemployed, said that he isn't interested in suing OfficeMax, but that since his wife was "traumatized" by the letter, he wants an apology from the company's chief executive.

He also wants to know how OfficeMax got the information. The last thing Seay remembers buying at OfficeMax since his daughter's death last February is some paper.

"Why do they have that?" Seay said of the information about his daughter's death. "What do they need that for? How she died, when she died? It’s not really personal, but looking at them, it is. That’s not something they would ever need."

The nation has recently been riveted by the debate over how Americans' personal data is gathered by government agencies, and corporate data-mining has drawn concern as well.

Retail giant Target reportedly knows how to use its data to identify pregnant customers, and it recently lost tens of millions of customers' credit and debit card numbers, among other data, to hackers.

Gatherers of consumer data also are reportedly selling off lists of rape victims and AIDS and HIV patients, a privacy group told Congress in December.

OfficeMax has not identified the company whose mailing list it used to send the letter to Seay.



Why 3D doesn't work and never will. Case closed. | Roger Ebert's Journal | Roger Ebert

$
0
0

Comments:"Why 3D doesn't work and never will. Case closed. | Roger Ebert's Journal | Roger Ebert"

URL:http://www.rogerebert.com/rogers-journal/why-3d-doesnt-work-and-never-will-case-closed


Why 3D doesn't work and never will. Case closed. | by Roger Ebert | January 23, 2011

I received a letter that ends, as far as I am concerned, the discussion about 3D. It doesn't work with our brains and it never will.

The notion that we are asked to pay a premium to witness an inferior and inherently brain-confusing image is outrageous. The case is closed.

This letter is from Walter Murch, the most respected film editor and sound designer in the modern cinema. As an editor, he must be intimately expert in how an image interacts with the audience's eyes. He won an Academy Award in 1979 for his work on "Apocalypse Now," whose sound was a crucial aspect of its effect.

Wikipedia writes: "Murch is widely acknowledged as the person who coined the term Sound Designer, and along with colleagues developed the current standard film sound format, the 5.1 channel array, helping to elevate the art and impact of film sound to a new level. 'Apocalypse Now' was the first multi-channel film to be mixed using a computerized mixing board." He won two more Oscars for the editing and sound mixing of "The English Patient."

"He is perhaps the only film editor in history," the Wikipedia entry observes, "to have received Academy nominations for films edited on four different systems:

• "Julia" (1977) using upright Moviola • "Apocalypse Now" (1979), "Ghost" (1990), and "The Godfather, Part III" (1990) using KEM flatbed • "The English Patient" (1996) using Avid. •  "Cold Mountain" (2003) using Final Cut Pro on an off-the shelf PowerMac G4.

Now read what Walter Murch says about 3D:

Hello Roger,

I read your review of "The Green Hornet" and though I haven't seen the film, I agree with your comments about 3D.

The 3D image is dark, as you mentioned (about a camera stop darker) and small. Somehow the glasses "gather in" the image -- even on a huge Imax screen -- and make it seem half the scope of the same image when looked at without the glasses.

I edited one 3D film back in the 1980's -- "Captain Eo" -- and also noticed that horizontal movement will strobe much sooner in 3D than it does in 2D. This was true then, and it is still true now. It has something to do with the amount of brain power dedicated to studying the edges of things. The more conscious we are of edges, the earlier strobing kicks in.  

The biggest problem with 3D, though, is the "convergence/focus" issue. A couple of the other issues -- darkness and "smallness" -- are at least theoretically solvable. But the deeper problem is that the audience must focus their eyes at the plane of the screen -- say it is 80 feet away. This is constant no matter what.

But their eyes must converge at perhaps 10 feet away, then 60 feet, then 120 feet, and so on, depending on what the illusion is. So 3D films require us to focus at one distance and converge at another. And 600 million years of evolution has never presented this problem before. All living things with eyes have always focussed and converged at the same point.

If we look at the salt shaker on the table, close to us, we focus at six feet and our eyeballs converge (tilt in) at six feet. Imagine the base of a triangle between your eyes and the apex of the triangle resting on the thing you are looking at. But then look out the window and you focus at sixty feet and converge also at sixty feet. That imaginary triangle has now "opened up" so that your lines of sight are almost -- almost -- parallel to each other.

We can do this. 3D films would not work if we couldn't. But it is like tapping your head and rubbing your stomach at the same time, difficult. So the "CPU" of our perceptual brain has to work extra hard, which is why after 20 minutes or so many people get headaches. They are doing something that 600 million years of evolution never prepared them for. This is a deep problem, which no amount of technical tweaking can fix. Nothing will fix it short of producing true "holographic" images.

Consequently, the editing of 3D films cannot be as rapid as for 2D films, because of this shifting of convergence: it takes a number of milliseconds for the brain/eye to "get" what the space of each shot is and adjust.

And lastly, the question of immersion. 3D films remind the audience that they are in a certain "perspective" relationship to the image. It is almost a Brechtian trick. Whereas if the film story has really gripped an audience they are "in" the picture in a kind of dreamlike "spaceless" space. So a good story will give you more dimensionality than you can ever cope with.

So: dark, small, stroby, headache inducing, alienating. And expensive. The question is: how long will it take people to realize and get fed up?

All best wishes,

Walter Murch.  

Salt shaker and landscape Photoshops by Marie Haws.              

   


Linode Blog » An old system and a SWAT team


URL:https://blog.linode.com/2014/01/19/an-old-system-and-a-swat-team/


January 19, 2014 8:01 pm

This is what Linode employees, along with the fine men and women of the Galloway police department, had to deal with this afternoon – their SWAT team storming the Linode office, forcing everyone out for about an hour while they performed a room-to-room sweep of the building, complete with an explosives-sniffing dog (who was very happy). They had received a false report which provoked them to respond in this manner – and it's their job, after all, to respond to reports, even if a report turns out to be a hoax. They were great, and I thank them.

Not so coincidentally, at about the same time we were made aware that a database on an old personal server had been accessed using old forum credentials obtained from the incident last year. This server is not under the umbrella of our security team because it plays no role in Linode infrastructure. Unfortunately, it did have a restore of the phpBB forum database on it from 2010-03-03. Forum users who existed at that time and who haven't changed their credentials since have had them revoked and will need to reset them. We regret that this happened and apologize for the oversight. We will be discussing new security policies to address scenarios like this.

On the subject of security, last year we stopped all other developments and focused on nothing but security for over six months. We did everything we could think of, from significantly reducing our Internet-facing footprint, to defining, testing, or improving practices and policies for going forward, to third-party penetration testing. We did this until we ran out of things to fix and ran out of ideas to pursue, and our security team continues to proactively assess our infrastructure and services. This was a monumental effort and a story that deserves to be told, but these efforts and their outcomes belong in a post of their own. Stay tuned.

We know how important transparency is and how we’ve needed to do a better job with it in the past, and well … this is the story.

As always, if you have any questions please feel free to contact us.

-Chris

Christopher S. Aker
Linode founder & CEO


Nissan Sells 100,000 LEAFs, Captures 48% Of Worldwide Market To Date


URL:http://insideevs.com/nissan-sells-100000-leafs-captures-48-of-worldwide-market-to-date/


This past weekend Nissan announced that, after just over three years on the market, the LEAF had reached 100,000 vehicles sold.

More impressive still, InsideEVs calculated that Nissan had sold only about 88,000 through the end of November, and the company itself shortly thereafter announced reaching the 92,000 mark in early December – meaning Nissan has sold about 12,000 LEAFs worldwide in the past two months.

Overall, Nissan has captured 48% of pure electric sales worldwide since 2010, according to the company.

Naturally, as this is a major milestone (the LEAF is the first all-electric car to reach six-figure sales), Nissan has a cute way of celebrating – a featurette on the 99,999th customer (press release below).

As mentioned above, this is a pretty big milestone for Nissan; look for the company to celebrate again with owner 100k internationally shortly…which we will cover as well.

NASHVILLE, Tenn. – Amy Eichenberger of Charlottesville, Va., became the 99,999th global Nissan LEAF customer when she purchased her 100 percent electric vehicle (EV) at Colonial Nissan.

Amy, a 47-year-old mother of two, wasn’t even in the market for a new car. Then she spotted a University of Virginia colleague’s Nissan LEAF and decided she wanted to know more about the “modern-looking, futuristic and progressive” car.

“As an architect, the style first got my attention, and I loved the concept of zero emissions,” Amy said. Amy is a project manager overseeing major capital investments for the University of Virginia in Charlottesville.

Nissan LEAF was the first car Amy test drove, and she loved the zip it had. A Mercedes driver for 10 years, Amy describes herself as “picky.” Quality, safety, a “glide ride” and reliability were at the top of Amy’s auto shopping list criteria.

She said she had a few initial reservations—primarily around range—so she tested out some gasoline and diesel competitors as well. “I’d been told once I drove a Mercedes I’d never drive anything else again. I don’t need fancy, but I do appreciate the solid feel and craftsmanship of a luxury vehicle, and I get that in the LEAF,” Amy said.

“The general fuel economy out there is unimpressive and many of them felt tin-canny. I didn’t even want to look at anything in the 20 MPG range. I considered the VW Jetta TDI, Toyota Prius, Honda CRV and a couple of Subaru wagons, and I always came back to the Nissan LEAF. Everything else seemed stuck in the past,” Amy explained. 

Amy ultimately chose a LEAF S in Glacier White. Her commute is about 10 miles to the university each day and most of her errand-running is around the city—well under the LEAF’s estimated range of 84 miles on a full charge. 

“I have friends I like to visit in Richmond, which I can do in the LEAF with some planning, and in DC, which I’ll do in my son’s or boyfriend’s car. LEAF will meet my needs 98 percent of the time, and I didn’t want to let a little range anxiety prevent me from missing out on what I consider a much more progressive and forward-thinking vehicle than any of the alternatives.”

Chris Crowley, the dedicated EV salesperson for Colonial Nissan, sold Amy her LEAF. He explained that LEAF buyers are not typical walk-ins. “LEAF buyers generally come in well educated about the vehicle, looking for even more information and wanting to see how it feels and drives. We spend a lot of time talking about driving habits to make sure it meets their needs and reviewing how very much it’s like any other vehicle in its capabilities with the added benefit of no fuel bill. Folks like to be green, but you can talk to their pocket books as well,” Chris said.

Chris has been with Colonial Nissan for two years and has been the lead EV person for most of that time. He’s sold nine LEAFs total with three of those coming in the past three weeks. “LEAF sales have picked up because once we were selling to engineers who were fans of the car and knew exactly how it worked. Now we’re selling to a much broader audience, and I think we’ve benefitted from a few folks who resolved to be greener in the new year.”

Nissan LEAF launched in the United States in December 2010. The United States accounts for nearly half of LEAF sales worldwide. The pace of LEAF sales has continued to accelerate: in 2013, Nissan sold 22,610 of the electric vehicles in the United States, more than twice as many as in 2012 and more than 2012 and 2011 sales combined.

Nissan LEAF traditionally has performed well on the West Coast with notable markets such as San Francisco, Los Angeles and Seattle, but now interest has expanded across the country. New hot markets have emerged such as Atlanta, which has been the No. 1 LEAF market for the past five months.

“With LEAF, we see a high level of organic growth and viral sales where LEAF owners become our best evangelists and salespeople. With electric vehicles, many folks presume a 100 percent electric vehicle won’t meet their needs until they chat with a neighbor, co-worker or family friend who loves their LEAF and explain its practicality, and then it goes on their consideration list,” said Erik Gottfried, Nissan’s director of EV Sales and Marketing. “In fact, we’re seeing similar results with the geographic dispersion of sales. With sales high in Atlanta, we now see other Georgia markets such as Macon and Columbus picking up significant momentum, similar to Eugene, Ore., following on the success of Portland.”

Nissan LEAF is the best-selling EV in history, with a 48 percent share of the global electric vehicle market. As of November 2013, Nissan LEAF drivers have completed an estimated 1 billion zero-emission kilometers, saving approximately 165 million kilograms of CO2.

Nissan LEAF offers powerful acceleration, quiet operation, energy efficiency and low cost of maintenance. Nissan has extended the standard warranty for the battery-power holding capacity with its own additional warranty for customer satisfaction and assurance.

After leading the era of electrification in passenger vehicles with the LEAF, in 2014 Nissan will become the first to bring a mass-market all-electric light commercial vehicle to market. The e-NV200 will go on sale in Europe and Japan bringing the benefits of quiet, cost-efficient, zero-emissions mobility to businesses.

In June 2014, Nissan will participate in the 24 Hours of Le Mans with the NISSAN ZEOD RC and aims to set a record for the fastest all-electric, zero-emissions lap of the circuit. Nissan is committed to using the EV platform to break new ground in both the commercial-vehicle and motorsports arenas.

Why Does A Good Kettle Cost $90+? | Bigger On The Inside


URL:http://blog.chewxy.com/2014/01/20/why-does-a-good-kettle-cost-90


Yesterday morning I woke up to a house without electricity – that meant I woke up in a puddle of sweat because the fan was no longer turned on. It turned out that my housemate, while making coffee, had tripped the mains of the house. The kettle had caused the trip. It was no longer safe to use the kettle and so I had to buy a new kettle.

The Dead Kettle

Lately, I had coincidentally been considering buying a new kettle to replace the old one, which I have used since 2007. I was also kinda tired of sticking a thermometer in the water to figure out its temperature before I brew my coffee or tea. However, I never actually had a good reason to do so – the kettle we had was absolutely serviceable. Now I do.

And so, I took a break from writing my book on Javascript (alternate title) and hopped online to find the best variable temperature control kettles available. Being the coffee and tea snob that I am, I perused a number of forums looking for the best recommendations for such a kettle. I narrowed my choices down to the Breville BKE820XL ($149), the Cuisinart CPK-17 ($115), or the Sunbeam KE9450 ($100). The main reason I shortlisted them was that they had temperature control. There was also another brand mentioned – Shark, or Belle, or Da Vinci – basically Chinese knockoffs of the Cuisinart.

The one thing I noticed about these is that they all cost more than $90, which is way more than what I'm willing to pay for a kettle. My old kettle had cost me something like $10 from K-Mart, and I wasn't very willing to part with so much money for a replacement. So I decided to look for a cheaper version – I was willing to forgo the temperature control. After all, I had been using a manual thermometer for years and it hadn't affected my making tea or coffee.

My housemate had also mentioned earlier that she would prefer something metal over a plastic kettle. And so I went to look for normal electric kettles. To my frustration, all the basic kettles cost roughly the same price: $39. The branded ones cost slightly more, and the non-branded ones slightly less. Why in the world would they cost so much? A kettle is not something difficult to build. The most difficult part is the grounding – which metal kettles are in sore need of, lest they give you an electric shock. But it's really not that difficult to build a kettle, nor does it cost that much.

How to Build a Kettle With Temperature Control

I'm a big fan of conceptual thinking. I once complained about some of my acquaintances' inability to even conceptually build something, let alone actually build it. In Programming Pearls, this was referred to as the skill of back-of-the-envelope calculation. I think it's a very good skill to have.

So I began to conceptually build a kettle with temperature control, taking a top-down approach in my mind. First, I built a mental model of what is needed in a basic kettle: a heating element, a receptacle to hold liquid while it boils, and some sort of automatic off switch – probably heat powered – to cut the power once the water hits boiling point. Then I thought about what extra parts the same kettle needs to gain temperature control: a) a reading of the water temperature; and b) a switch that can act on that reading.

This implied only one thing: a PID controller and a relay of some sort. I know what PIDs cost – I own a couple of PID controllers and have built one from scratch. Because of the cost, that also meant that the kettle I currently have (the broken one) couldn't possibly be digitally controlled – it'd cost too much.

Dismantling the kettle

So out of curiosity, I began to dismantle my kettle. A kettle is an extremely simple electrical appliance, really. A heating coil heats up water. When it is hot enough, it stops. And I was right – it was ridiculously simple. It simply consisted of a switch, a power source and a heating element:

I had removed the switch to remove the heating element from the jug. The image above shows a reconstruction of how the power supply connects to the heating element. Inset is the power supply connected to the switch. The red wires lead to an LED which tells you that the heating element is turned on.

However, I was still puzzled by how the kettle would automatically turn off when the water boiled. And so I looked carefully at the details of everything in the kettle. Suddenly it hit me. I was looking at it all along, and it was staring back at me:

OK, pareidolia aside, the metal strip in the middle actually plays a very important part in the analog automatic control. I had actually seen one of those before when I was very young. It's a bimetallic strip that curls outwards when heated – basically two pieces of metal pressed into one strip. Because the two metals have different expansion rates when heated, heating the strip causes it to bend one way.

To prove that, I took a blowtorch to it, and it curled outwards exactly as I thought it would. When it cooled down, it became a flat strip again. So the strip curves outwards when hot – it must trigger something. Looking at the back of the power source, I found what it triggered: a spring-loaded switch. Here's how the back of the power supply looks:

The way the kettle is constructed, the heating element and the power source are pressed together when the blue switch is pressed down to turn on the kettle. The switch is basically a see-saw. When it’s pressed down, the lever will latch on at the top (the silver coloured piece in the picture above), pressing the power source towards the heating element. The copper piece in the picture above comes into contact with the metal base plate of the heating element, and the circuit is complete. The heating element heats up.

When the bimetallic strip gets heated up, it curls outwards, pressing on the trigger. The trigger, when pressed, pushes the lever outwards, causing the latch to unlatch. Once the latch is undone, the power source is no longer pressing onto the base plate of the heating element. The circuit is broken, and the water stops heating.

There is a certain elegance to these analogue systems that one must be able to appreciate. Using a bimetallic strip to trigger a mechanical trigger is simply quite a brilliantly elegant way of solving a problem. There is no reason to build an expensive kettle with digital controls if perfectly cheap and simple analogue solutions like these exist.

How Much Does A Basic Kettle Cost?

Once one knows how something works, the magic is lost. Very quickly I began tabulating how much it would cost to build and market a kettle. My answer came to a little over $7. Even with the fanciest metal designs, I estimate it'd cost no more than $12 to make and market a basic kettle with no bells and whistles. Even on the most generous of estimates – say extremely expensive industrial designers were hired to craft the shape of the kettle (something like the Bugatti Vera), and the manufacturer uses the most expensive possible manufacturing tools – the per-unit cost over a mass production run (say 100,000 units) would not exceed $16.

So why do the rest of the basic kettles cost $39? Given the almost identical prices across brands, they essentially all have to function on the same principle, and they'd have pretty much the same fixed and variable costs. If we assume the market is working, that would mean I was wrong – that it doesn't cost $16 to manufacture and market those kettles. The problem is, there exist on the market homebrand kettles that cost $10 or so. If both homebrand and branded kettles use a similar principle, then it is cause to wonder why the basic kettles of the bigger brands all cost $39.

Unless, of course, those $39 kettles use a digital solution to stop the water from boiling. But that would be rather silly, given that an analogue solution works better. I do not see how a digital solution is superior. It's rather like saying replacing your car's control panel with a touch screen is a better idea – oh wait. Tesla actually thought this was a good idea. It isn't.

Given that a digital control solution is not going to be much superior (and, as in the Tesla example, could actually be inferior) – water has ONE boiling temperature, 100 Celsius – why would a basic kettle cost $39 and up? Yes, people could be paying for the design (because a red kettle is so going to boil water faster than a white kettle), but really, there is no basis in reality for paying so much for a basic kettle. Especially so if the basic kettle is digitally driven: it's just a waste of the designers' time and energy, trying to cram so much electronics into a small space.

No, rather, basic kettles from bigger brands cost $39 as a form of price differentiation. Consumers who can afford to pay more, or who do not know better, will pay more for the same basic product. Anything extra is mostly cosmetic and has no bearing on functionality (or where it does – for example, if the shape or material of a kettle helps it retain heat better – it doesn't add much marginal utility).

How to Build a Kettle With Temperature Control – pt2

Which brings me to why I actually want a kettle with temperature control. Unlike a basic kettle, I understand why a more advanced kettle with variable temperatures would cost so much more. As previously mentioned, temperature control requires something to read the temperature of the water – a temperature probe (I own a couple of PT100 probes myself) – and a PID controller. Those things are not cheap, even bought in bulk. A basic PID would set one back about $30 (and that was the best bulk price I could get). With PID controllers also come the costs of hiring electronics designers and micro-controller programmers. A relay is also required to turn the kettle on and off, instead of a mechanical solution like the one above. Furthermore, these electronics have to be heatproofed: a kettle is a very hot thing, and the microcontrollers in it must not fail under heat.
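To make the control-loop idea concrete, here is a toy sketch of the logic such a kettle might run. It is purely illustrative – the gains are arbitrary, and readTemperature and setRelay are hypothetical stand-ins for the probe and relay just discussed:

// Hypothetical hardware hooks standing in for the PT100 probe and relay.
function readTemperature() { return 25; } // degrees Celsius
function setRelay(on) { console.log('relay', on ? 'on' : 'off'); }

// A minimal PID controller: output = Kp*error + Ki*integral + Kd*derivative.
function makePid(kp, ki, kd) {
  var integral = 0, lastError = 0;
  return function (setpoint, measured, dt) {
    var error = setpoint - measured;
    integral += error * dt;
    var derivative = (error - lastError) / dt;
    lastError = error;
    return kp * error + ki * integral + kd * derivative;
  };
}

var pid = makePid(2.0, 0.05, 1.0); // arbitrary gains, for illustration only

// Once a second, nudge the water towards 80C (a green tea temperature).
setInterval(function () {
  var output = pid(80, readTemperature(), 1);
  setRelay(output > 0); // a relay is on/off, so threshold the output
}, 1000);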

All these little things add up. But here the electronics are actually useful: rather than being spent on basic functionality, the designers' and programmers' work goes toward precision boiling. And that's a good use. This is one of the main reasons why I am looking at variable temperature kettles (okay, I admit, it'd make my life a lot easier too, if I didn't have to stand and take the temperature of the water every time I make my coffee or tea).

The reason is that I don't feel ripped off paying for a variable temperature kettle. Knowing how a basic analogue kettle works, and especially knowing how much it would cost to build one, I would feel terrible spending $40 on a kettle that works exactly the same way in principle. And yes, before you ask: the ones I saw DO work on the same exact principles. How do I know? The switches in those kettles are mechanical, and a digital kettle wouldn't have a mechanical switch (you could design one that does, but I don't see the point).

Why Does A Good Kettle Cost $90+?

Here's a rough bill of materials/services and estimates I came up with, assuming a unit cost over 100,000 units in a production run:

Description and estimated cost (per unit):

• Manufacture of kettle body (metal, because plastic is so pedestrian): $5
• Industrial design of kettle body (assuming 3 high-calibre ($100k p/a) industrial designers): $7
• PID controller + other electronics: $10
• Heating element: $3
• Electronics engineering (assuming 3 electronics engineers and 2 test technicians): $7
• Various manufacturing fixed costs (safety tests, product tests, etc.): $5
• Marketing, stocking, etc. fees: $5
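As a sanity check, the line items above can be totted up in a couple of lines of JavaScript (the figures are just my estimates from the table):

// Per-unit cost estimates, in table order: body, design, electronics,
// heating element, engineering, fixed costs, marketing.
var costs = [5, 7, 10, 3, 7, 5, 5];

var total = costs.reduce(function (sum, c) { return sum + c; }, 0);
console.log('$' + total + ' per unit'); // $42 per unit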

It comes close to $45. And these are quite generous assumptions. I have no doubt that, given the scale of these big branded companies, they would have the economies of scale and scope to reduce these costs even further. Of course, this being a mental estimate, it could go very wrong. It could be that the companies don't manufacture 100,000 units per run. That would change things quite a bit.

This means a markup of roughly double. But the main reason I would rather go for a $90 kettle instead of a $39 kettle is that the $90 kettle does a lot more: there is quite a bit more function per dollar than in the $39 one.

But …

Of course, then there's the Hamilton Beach programmable kettle, which sells for pretty close to my estimated cost of building a variable temperature kettle – about $45 to build and market a digital kettle with basic PID functions. I was willing to accept that advanced kettles cost about $90 because PID tuning is of variable quality, and I'd be willing to accept that companies like Bonavita or Breville actually spend more time and money tuning these little details. I don't know how Hamilton Beach did it, and I'm exceedingly curious as to how. The people at Steepster don't think too highly of it, though, so there are some clues there.

As for which kettle I bought: the house is currently kettle-less while I figure out which one to buy. I'm leaning heavily toward the Cuisinart CPK-17. I am also quite partial to just buying a water warmer like the Zojirushi, mainly because my mum uses one and I grew up with one. But who knows, I might just go buy another $10 kettle.

TL;DR: I seem to have given a lot of thought to buying a kettle (this article is 2250 words long). I have come to the conclusion that expensive basic kettles are the same as cheap basic kettles, except for price differentiation. The increase in marginal utility from purchasing an expensive basic kettle over a baseline cheap one is not big enough to warrant it. It'd be better to buy a temperature control kettle, where the increase in marginal utility is enough to warrant the purchase. What do you think?

Thanks for reading so far.

Bill and Melinda Gates on Three Myths on the World's Poor - WSJ.com


URL:http://online.wsj.com/news/articles/SB10001424052702304149404579324530112590864


Jan. 17, 2014 7:50 p.m. ET

By almost any measure, the world is better off now than it has ever been before. Extreme poverty has been cut in half over the past 25 years, child mortality is plunging, and many countries that had long relied on foreign aid are now self-sufficient.

So why do so many people seem to think things are getting worse? Much of the reason is that all too many people are in the grip of three deeply damaging myths about global poverty and development. Don't get taken in by them.

MYTH ONE: Poor countries are doomed to stay poor.

They're really not. Incomes and other measures of human welfare are rising almost everywhere—including Africa.

Take Mexico City, for instance. In 1987, when we first visited, most homes lacked running water, and we often saw people trekking on foot to fill up water jugs. It reminded us of rural Africa. The guy who ran Microsoft's Mexico City office would send his kids back to the U.S. for checkups to make sure the smog wasn't making them sick.

Today, Mexico City is mind-blowingly different, boasting high-rise buildings, cleaner air, new roads and modern bridges. You still find pockets of poverty, but when we visit now, we think, "Wow—most people here are middle-class. What a miracle." You can see a similar transformation in Nairobi, New Delhi, Shanghai and many more cities around the world.

In our lifetimes, the global picture of poverty has been completely redrawn. Per-person incomes in Turkey and Chile are where the U.S. was in 1960. Malaysia is nearly there. So is Gabon. Since 1960, China's real income per person has gone up eightfold. India's has quadrupled, Brazil's has almost quintupled, and tiny Botswana, with shrewd management of its mineral resources, has seen a 30-fold increase. A new class of middle-income nations that barely existed 50 years ago now includes more than half the world's population.

And yes, this holds true even in Africa. Income per person in Africa has climbed by two-thirds since 1998—from just over $1,300 then to nearly $2,200 today. Seven of the 10 fastest-growing economies of the past half-decade are in Africa.

Here's our prediction: By 2035, there will be almost no poor countries left in the world. Yes, a few unhappy countries will be held back by war, political realities (such as North Korea) or geography (such as landlocked states in central Africa). But every country in South America, Asia and Central America (except perhaps Haiti) and most in coastal Africa will have become middle-income nations. More than 70% of countries will have a higher per-person income than China does today.

MYTH TWO: Foreign aid is a big waste.

Actually, it is a phenomenal investment. Foreign aid doesn't just save lives; it also lays the groundwork for lasting, long-term economic progress.

Many people think that foreign aid is a large part of the budgets of rich countries. When pollsters ask Americans what share of the budget goes to aid, the most common response is "25%." In fact, it is less than 1%. (Even Norway, the most generous nation in the world, spends less than 3%.) The U.S. government spends more than twice as much on farm subsidies as on international health aid. It spends more than 60 times as much on the military.

One common complaint about foreign aid is that some of it gets wasted on corruption—and of course, some of it does. But the horror stories you hear—where aid just helps a dictator build new palaces—mostly come from a time when aid was designed to win allies for the Cold War rather than to improve people's lives.

The problem today is much smaller. Small-scale corruption, like a government official who puts in for phony travel expenses, is an inefficiency that amounts to a tax on aid. We should try to reduce it, but we can't eliminate it, any more than we can eliminate waste from every government program—or from every business, for that matter. Suppose small-scale corruption amounts to a 2% tax on the cost of saving a life. We should try to cut that. But if we can't, should we stop trying to save those lives?

We've heard plenty of people calling to shut down aid programs if one dollar of corruption is found. But four of the past seven governors of Illinois went to prison for corruption, and no one is demanding that Illinois's schools be shut down or its highways closed.

We also hear critics complain that aid keeps countries dependent on outsiders' generosity. But this argument focuses only on the most difficult remaining cases still struggling to be self-sufficient. Here is a quick list of former major aid recipients that have grown so much that they receive hardly any aid today: Brazil, Mexico, Chile, Costa Rica, Peru, Thailand, Mauritius, Botswana, Morocco, Singapore and Malaysia.

Aid also drives improvements in health, agriculture and infrastructure that correlate strongly with long-run growth. A baby born in 1960 had an 18% chance of dying before her fifth birthday. For a child born today, it is less than 5%. In 2035, it will be 1.6%. We can't think of any other 75-year improvement in human welfare that would even come close. A waste? Hardly.

MYTH THREE: Saving lives leads to overpopulation.

Going back at least to Thomas Malthus in 1798, people have worried about doomsday scenarios in which food supply can't keep up with population growth. This kind of thinking has gotten the world in a lot of trouble. Anxiety about the size of the world population has a dangerous tendency to override concern for the human beings who make up that population.

Letting children die now so they don't starve later isn't just heartless. It also doesn't work, thank goodness.

It may be counterintuitive, but the countries with the most deaths have among the fastest-growing populations in the world. This is because the women in these countries tend to have the most births, too.

When more children survive, parents decide to have smaller families. Consider Thailand. Around 1960, child mortality started going down. Then, around 1970, after the government invested in a strong family planning program, birthrates started to drop. In the course of just two decades, Thai women went from having six children on average to having just two. Today, child mortality in Thailand is almost as low as it is in the U.S., and Thai women have an average of 1.6 children. This pattern of falling death rates followed by falling birthrates applies for the vast majority of the world.

Saving lives doesn't lead to overpopulation. Just the opposite. Creating societies where people enjoy basic health, relative prosperity, fundamental equality and access to contraceptives is the only way to a sustainable world.

More people, especially political leaders, need to know about the misconceptions behind these myths. The fact is, whether you look at the issue as an individual or a government, contributions to promote international health and development offer an astonishing return. We all have the chance to create a world where extreme poverty is the exception rather than the rule.

—This piece is adapted from the forthcoming annual letter of the Bill & Melinda Gates Foundation, of which the authors are co-chairs. Mr. Gates is the chairman of Microsoft. To receive the annual letter, sign up at gatesletter.com.

cerebris/orbit.js · GitHub


URL:https://github.com/cerebris/orbit.js


Orbit.js

Orbit is a standalone library for coordinating access to data sources and keeping their contents synchronized.

Orbit provides a foundation for building advanced features in client-side applications such as offline operation, maintenance and synchronization of local caches, undo / redo stacks and ad hoc editing contexts.

Orbit relies heavily on promises, events and low-level transforms.

Goals

  • Support any number of different data sources in an application and provide access to them through common interfaces.

  • Allow for the fulfillment of requests by different sources, including the ability to specify priority and fallback plans.

  • Allow records to simultaneously exist in different states across sources.

  • Coordinate transformations across sources. Handle merges automatically where possible but allow for complete custom control.

  • Allow for blocking and non-blocking transformations.

  • Allow for synchronous and asynchronous requests.

  • Support transactions and undo/redo by tracking inverses of operations.

  • Work with plain JavaScript objects.

How does it work?

Orbit requires that every data source support one or more common interfaces. These interfaces define how data can be both accessed and transformed.

Orbit includes several data sources: an in-memory cache, a local storage source, and a source for accessing RESTful APIs via AJAX. You can define your own data sources that will work with Orbit as long as they support Orbit's interfaces.

The methods for accessing and transforming data return promises. These promises might be fulfilled synchronously or asynchronously. Once fulfilled, events are triggered to indicate success or failure. Any event listeners can engage with an event by returning a promise. In this way, multiple data sources can be involved in a single action.

Standard connectors are supplied for listening to events on a data source and calling corresponding actions on a target. These connectors can be blocking (i.e. they don't resolve until all associated actions are resolved) or non-blocking (i.e. associated actions are resolved in the background without blocking the flow of the application). Connectors can be used to enable uni-directional or bi-directional flow of data between sources.

Dependencies

Orbit.js has no specific external dependencies, but must be used with a library that implements the Promises/A+ spec, such as RSVP.

Simple Example

// Create data sources with a common schema
var schema = {
  idField: '__id',
  models: {
    planet: {}
  }
};

var memorySource = new Orbit.MemorySource(schema);
var restSource = new Orbit.JSONAPISource(schema);
var localSource = new Orbit.LocalStorageSource(schema);

// Connect MemorySource -> LocalStorageSource (using the default blocking strategy)
var memToLocalConnector = new Orbit.TransformConnector(memorySource, localSource);

// Connect MemorySource <-> JSONAPISource (using the default blocking strategy)
var memToRestConnector = new Orbit.TransformConnector(memorySource, restSource);
var restToMemConnector = new Orbit.TransformConnector(restSource, memorySource);

// Add a record to the memory source
memorySource.add('planet', {name: 'Jupiter', classification: 'gas giant'}).then(
  function(planet) {
    console.log('Planet added - ', planet.name, '(id:', planet.id, ')');
  }
);

// Log the transforms in all sources
memorySource.on('didTransform', function(operation, inverse) {
  console.log('memorySource', operation);
});

localSource.on('didTransform', function(operation, inverse) {
  console.log('localSource', operation);
});

restSource.on('didTransform', function(operation, inverse) {
  console.log('restSource', operation);
});

// CONSOLE OUTPUT
//
// memorySource {op: 'add', path: 'planet/1', value: {__id: 1, name: 'Jupiter', classification: 'gas giant'}}
// localSource {op: 'add', path: 'planet/1', value: {__id: 1, name: 'Jupiter', classification: 'gas giant'}}
// restSource {op: 'add', path: 'planet/1', value: {__id: 1, id: 12345, name: 'Jupiter', classification: 'gas giant'}}
// memorySource {op: 'add', path: 'planet/1/id', value: 12345}
// localSource {op: 'add', path: 'planet/1/id', value: 12345}
// Planet added - Jupiter (id: 12345)

In this example, we've created three separate sources and connected them with transform connectors that are blocking. In other words, the promise returned from an action won't be fulfilled until every event listener that engages with it (by returning a promise) has been fulfilled.

In this case, we're adding a record to the memory source, which the connectors help duplicate in both the REST source and local storage. The REST source returns an id from the server, which is then propagated back to the memory source and then the local storage source.

Note that we could also connect the sources with non-blocking connectors via the blocking: false option:

// Connect MemorySource -> LocalStorageSource (non-blocking)
var memToLocalConnector = new Orbit.TransformConnector(memorySource, localSource, {blocking: false});

// Connect MemorySource <-> JSONAPISource (non-blocking)
var memToRestConnector = new Orbit.TransformConnector(memorySource, restSource, {blocking: false});
var restToMemConnector = new Orbit.TransformConnector(restSource, memorySource, {blocking: false});

In this case, the promise generated from memorySource.add will be resolved immediately, after which records will be asynchronously created in the REST source and local storage. Any differences, such as an id returned from the server, will be automatically patched back to the record in the memory source.

Interfaces

The primary interfaces provided by Orbit are:

  • Requestable - for managing requests for data via methods such as find, create, update and destroy.

  • Transformable - for keeping data sources in sync through low level transformations which follow the JSON PATCH spec detailed in RFC 6902.

These interfaces can extend (i.e. be "mixed into") your data sources. They can be used together or in isolation.

Requestable

The Requestable interface provides a mechanism to define custom "action" methods on an object or prototype. Actions might typically include find, add, update, patch and remove, although the number and names of actions can be completely customized.

The Requestable interface can extend an object or prototype as follows:

var source = {};
Orbit.Requestable.extend(source);

This will make your object Evented (see below) and create a single action, find, by default. You can also specify alternative actions as follows:

var source = {};
Orbit.Requestable.extend(source, ['find', 'add', 'update', 'patch', 'remove']);

Or you can add actions later with Orbit.Requestable.defineAction():

var source = {};
Orbit.Requestable.extend(source); // defines 'find' by default
Orbit.Requestable.defineAction(source, ['add', 'update', 'remove']);
Orbit.Requestable.defineAction(source, 'patch');

In order to fulfill the contract of an action, define a default "handler" method with the name of the action preceded by an underscore (e.g. _find). This handler performs the action and returns a promise. Here's a simplistic example:

source._find = function(type, id) {
  return new RSVP.Promise(function(resolve, reject) {
    if (source._data[type] && source._data[type][id]) {
      resolve(source._data[type][id]);
    } else {
      reject(type + ' not found');
    }
  });
};

Actions combine promise-based return values with an event-driven flow. Events can be used to coordinate multiple handlers interested in participating with or simply observing the resolution of an action.

The following events are associated with an action (find in this case):

  • assistFind - triggered prior to calling the default _find handler. Listeners can optionally return a promise. If any promise resolves successfully, its resolved value will be used as the return value of find, and no further listeners will be called.

  • rescueFind - if assistFind and the default _find method fail to resolve, then rescueFind will be triggered. Again, listeners can optionally return a promise. If any promise resolves successfully, its resolved value will be used as the return value of find, and no further listeners will be called.

  • didFind - Triggered upon the successful resolution of the action by any handler. Any promises returned by event listeners will be settled in series before proceeding.

  • didNotFind - Triggered when an action can't be resolved by any handler. Any promises returned by event listeners will be settled in series before proceeding.

Note that the arguments for actions can be customized for your application. Orbit will simply pass them through regardless of their number and type. You will typically want actions of the same name (e.g. find) to accept the same arguments across your data sources.

Let's take a look at how this could all work:

// Create some new sources - assume their prototypes are already `Requestable`
var memorySource = new Orbit.MemorySource();
var restSource = new Orbit.JSONAPISource();
var localSource = new Orbit.LocalStorageSource();

////// Connect the sources via events

// Check local storage before making a remote call
restSource.on('assistFind', localSource.find);

// If the in-memory source can't find the record, query our rest server
memorySource.on('rescueFind', restSource.find);

// Audit success / failure
memorySource.on('didFind', function(type, id, record) {
  audit('find', type, id, true);
});
memorySource.on('didNotFind', function(type, id, error) {
  audit('find', type, id, false);
});

////// Perform the action
memorySource.find('contact', 1).then(
  function(contact) {
    // do something with the contact
  },
  function(error) {
    // there was a problem
  }
);

Configuration can (and probably should) be done well in advance of actions being called. You essentially want to hook up the wiring between sources and then restrict your application's direct access to most of them. This greatly simplifies your application code: instead of chaining together a large number of promises that include multiple sources in every call, you can interact with a single source of truth (typically an in-memory data source).

Transformable

Although the Requestable interface can help multiple sources coordinate in fulfilling a request, it's not sufficient to keep data sources synchronized. When one source fields a request, other sources may need to be notified of the precise data changes brought about in that source, so that they can all stay synchronized. That's where the Transformable interface comes in...

The Transformable interface provides a single method, transform, which can be used to change the contents of a source. Transformations must follow the JSON PATCH spec detailed in RFC 6902. They must specify an operation (add, remove, or replace), a path, and a value. For instance, the following transformations add, replace and then remove a record:

{op: 'add', path: 'planet/1', value: {__id: 1, name: 'Jupiter', classification: 'gas giant'}}
{op: 'replace', path: 'planet/1/name', value: 'Earth'}
{op: 'remove', path: 'planet/1'}

The Transformable interface can extend an object or prototype as follows:

var source = {};
Orbit.Transformable.extend(source);

This will ensure that your source is Evented (see below). It also adds a transform method. In order to fulfill the transform method, your source should implement a _transform method that performs the transform and returns a promise if the transformation is asynchronous.

It's important to note that the requested transform may not match the actual transform applied to a source. Therefore, each source should call didTransform for any transforms that actually take place. This method triggers the didTransform event, which returns the operation and an array of inverse operations.

transform may be called with a single transform operation, or an array of operations. Any number of didTransform events may be triggered as a result.

Transforms will be queued and performed serially in the order requested.
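Putting those pieces together, a minimal transformable source might look something like this sketch. The internal store and the inverse-operation bookkeeping here are illustrative assumptions, not Orbit's actual implementation:

var source = { _cache: {} };
Orbit.Transformable.extend(source);

// Apply the operation to a naive internal store, then report what
// actually happened (with its inverse) via didTransform.
source._transform = function(operation) {
  var _this = this;
  return new RSVP.Promise(function(resolve) {
    if (operation.op === 'add') {
      _this._cache[operation.path] = operation.value;
      _this.didTransform(operation, [{op: 'remove', path: operation.path}]);
    }
    resolve();
  });
};

source.transform({op: 'add', path: 'planet/1', value: {name: 'Jupiter'}});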

TransformConnector

A TransformConnector watches a transformable source and propagates any transforms to a transformable target.

Each connector is "one way", so bi-directional synchronization between sources requires the creation of two connectors.

Connectors can be "blocking" or "non-blocking". The difference is that "blocking" connectors will return a promise to the didTransform event, which will prevent the original transform from resolving until the promise itself has resolved. "Non-blocking" transforms do not block the resolution of the original transform - asynchronous actions are performed afterward.

If the target of a connector is busy processing transformations, then the connector will queue operations until the target is free. This ensures that the target's state is as up to date as possible before transformations proceed.

The connector's transform method actually applies transforms to its target. This method attempts to retrieve the current value at the path of the transformation and resolves any conflicts with the connector's resolveConflicts method. By default, a simple differential is applied to the target, although both transform and resolveConflicts can be overridden to apply an alternative differencing algorithm.
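For instance, a connector could be given a naive last-write-wins strategy along the lines of the sketch below. The resolveConflicts argument list shown here is an assumption for illustration only; consult the Orbit source for the real signature:

var connector = new Orbit.TransformConnector(memorySource, localSource);

// Hypothetical override: ignore whatever the target currently holds at
// `path` and unconditionally apply the incoming value.
connector.resolveConflicts = function(path, targetValue, sourceValue) {
  return [{op: 'replace', path: path, value: sourceValue}];
};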

Document

Document is a complete implementation of the JSON PATCH spec detailed in RFC 6902.

It can be manipulated via a transform method that accepts an operation, or with methods add, remove, replace, move, copy and test.

Data at a particular path can be retrieved from a Document with retrieve().
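Here is a quick sketch of how a Document might be used, with signatures inferred from the method list above (treat the details as assumptions):

var doc = new Orbit.Document();

// Build up the document with JSON PATCH-style operations...
doc.add('planet', {});
doc.add('planet/1', {name: 'Jupiter', classification: 'gas giant'});
doc.replace('planet/1/name', 'Earth');

// ...or apply a raw operation via transform
doc.transform({op: 'remove', path: 'planet/1/classification'});

doc.retrieve('planet/1'); // {name: 'Earth'}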

Notifications and Events

Orbit also contains a couple of classes for handling notifications and events. These will likely be separated into one or more microlibs.

Notifier

The Notifier class can emit messages to an array of subscribed listeners. Here's a simple example:

var notifier = new Orbit.Notifier();

notifier.addListener(function(message) {
  console.log("I heard " + message);
});

notifier.addListener(function(message) {
  console.log("I also heard " + message);
});

notifier.emit('hello'); // logs "I heard hello" and "I also heard hello"

Notifiers can also poll listeners with an event and return their responses:

var dailyQuestion = new Orbit.Notifier();

dailyQuestion.addListener(function(question) {
  if (question === 'favorite food?') return 'beer';
});

dailyQuestion.addListener(function(question) {
  if (question === 'favorite food?') return 'wasabi almonds';
});

dailyQuestion.addListener(function(question) {
  // this listener doesn't return anything, and therefore won't participate
  // in the poll
});

dailyQuestion.poll('favorite food?'); // returns ['beer', 'wasabi almonds']

Calls to emit and poll will send along all of their arguments.

Evented

The Evented interface uses notifiers to add events to an object. Like notifiers, events will send along all of their arguments to subscribed listeners.

The Evented interface can extend an object or prototype as follows:

var source = {};
Orbit.Evented.extend(source);

Listeners can then register themselves for particular events with on:

var listener1 = function(message) {
      console.log('listener1 heard ' + message);
    },
    listener2 = function(message) {
      console.log('listener2 heard ' + message);
    };

source.on('greeting', listener1);
source.on('greeting', listener2);

source.emit('greeting', 'hello'); // logs "listener1 heard hello" and
                                  //      "listener2 heard hello"

Listeners can be unregistered from events at any time with off:

source.off('greeting', listener2);

A listener can register itself for multiple events at once:

source.on('greeting salutation', listener2);

And multiple events can be triggered sequentially at once, assuming that you want to pass them all the same arguments:

source.emit('greeting salutation', 'hello', 'bonjour', 'guten tag');

Last but not least, listeners can be polled, just like in the notifier example (note that spaces can't be used in event names):

source.on('question', function(question) {
  if (question === 'favorite food?') return 'beer';
});

source.on('question', function(question) {
  if (question === 'favorite food?') return 'wasabi almonds';
});

source.on('question', function(question) {
  // this listener doesn't return anything, and therefore won't participate
  // in the poll
});

source.poll('question', 'favorite food?'); // returns ['beer', 'wasabi almonds']

License

Copyright 2014 Cerebris Corporation. MIT License (see LICENSE for details).

Anatomy of a Program in Memory : Gustavo Duarte

Comments:" Anatomy of a Program in Memory : Gustavo Duarte"

URL:http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory


Memory management is the heart of operating systems; it is crucial for both programming and system administration. In the next few posts I’ll cover memory with an eye towards practical aspects, but without shying away from internals. While the concepts are generic, examples are mostly from Linux and Windows on 32-bit x86. This first post describes how programs are laid out in memory.

Each process in a multi-tasking OS runs in its own memory sandbox. This sandbox is the virtual address space, which in 32-bit mode is always a 4GB block of memory addresses. These virtual addresses are mapped to physical memory by page tables, which are maintained by the operating system kernel and consulted by the processor. Each process has its own set of page tables, but there is a catch. Once virtual addresses are enabled, they apply to all software running in the machine, including the kernel itself. Thus a portion of the virtual address space must be reserved to the kernel:

This does not mean the kernel uses that much physical memory, only that it has that portion of address space available to map whatever physical memory it wishes. Kernel space is flagged in the page tables as exclusive to privileged code (ring 2 or lower), hence a page fault is triggered if user-mode programs try to touch it. In Linux, kernel space is constantly present and maps the same physical memory in all processes. Kernel code and data are always addressable, ready to handle interrupts or system calls at any time. By contrast, the mapping for the user-mode portion of the address space changes whenever a process switch happens:

Blue regions represent virtual addresses that are mapped to physical memory, whereas white regions are unmapped. In the example above, Firefox has used far more of its virtual address space due to its legendary memory hunger. The distinct bands in the address space correspond to memory segments like the heap, stack, and so on. Keep in mind these segments are simply a range of memory addresses and have nothing to do with Intel-style segments. Anyway, here is the standard segment layout in a Linux process:

When computing was happy and safe and cuddly, the starting virtual addresses for the segments shown above were exactly the same for nearly every process in a machine. This made it easy to exploit security vulnerabilities remotely. An exploit often needs to reference absolute memory locations: an address on the stack, the address for a library function, etc. Remote attackers must choose this location blindly, counting on the fact that address spaces are all the same. When they are, people get pwned. Thus address space randomization has become popular. Linux randomizes the stack, memory mapping segment, and heap by adding offsets to their starting addresses. Unfortunately the 32-bit address space is pretty tight, leaving little room for randomization and hampering its effectiveness.

The topmost segment in the process address space is the stack, which stores local variables and function parameters in most programming languages. Calling a method or function pushes a new stack frame onto the stack. The stack frame is destroyed when the function returns. This simple design, possible because the data obeys strict LIFO order, means that no complex data structure is needed to track stack contents – a simple pointer to the top of the stack will do. Pushing and popping are thus very fast and deterministic. Also, the constant reuse of stack regions tends to keep active stack memory in the cpu caches, speeding up access. Each thread in a process gets its own stack.
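To see frames being pushed, here's a minimal C sketch (my illustration, not from the article): each recursive call allocates a new frame, so the address of a local variable changes at every depth. On x86 Linux the stack grows toward lower addresses, so the printed addresses should decrease (compile without optimization so the calls aren't flattened).

#include <stdio.h>

static void frame(int depth) {
    int local;  /* lives in this call's stack frame */
    printf("depth %d: &local = %p\n", depth, (void *)&local);
    if (depth < 3)
        frame(depth + 1);  /* pushes another frame onto the stack */
}

int main(void) {
    frame(0);
    return 0;
}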

It is possible to exhaust the area mapping the stack by pushing more data than it can fit. This triggers a page fault that is handled in Linux by expand_stack(), which in turn calls acct_stack_growth() to check whether it’s appropriate to grow the stack. If the stack size is below RLIMIT_STACK (usually 8MB), then normally the stack grows and the program continues merrily, unaware of what just happened. This is the normal mechanism whereby stack size adjusts to demand. However, if the maximum stack size has been reached, we have a stack overflow and the program receives a Segmentation Fault. While the mapped stack area expands to meet demand, it does not shrink back when the stack gets smaller. Like the federal budget, it only expands.
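You can query the stack limit yourself with getrlimit(). A minimal sketch, assuming Linux and a C compiler; the values printed depend on your shell's ulimit settings, and an unlimited stack shows up as a very large number:

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;

    /* rlim_cur is the soft limit checked when the stack grows;
       rlim_max is the ceiling the soft limit can be raised to. */
    if (getrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft stack limit: %llu bytes\n", (unsigned long long)rl.rlim_cur);
    printf("hard stack limit: %llu bytes\n", (unsigned long long)rl.rlim_max);
    return 0;
}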

Dynamic stack growth is the only situation in which access to an unmapped memory region, shown in white above, might be valid. Any other access to unmapped memory triggers a page fault that results in a Segmentation Fault. Some mapped areas are read-only, hence write attempts to these areas also lead to segfaults.

Below the stack, we have the memory mapping segment. Here the kernel maps contents of files directly to memory. Any application can ask for such a mapping via the Linux mmap() system call (implementation) or CreateFileMapping() / MapViewOfFile() in Windows. Memory mapping is a convenient and high-performance way to do file I/O, so it is used for loading dynamic libraries. It is also possible to create an anonymous memory mapping that does not correspond to any files, being used instead for program data. In Linux, if you request a large block of memory via malloc(), the C library will create such an anonymous mapping instead of using heap memory. ‘Large’ means larger than MMAP_THRESHOLD bytes, 128 kB by default and adjustable via mallopt().
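For instance, here's a minimal anonymous mapping on Linux (an illustrative sketch; this is roughly the kind of request the C library makes on your behalf for large malloc() blocks):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 4096;  /* one page */

    /* Private, anonymous memory: backed by no file and visible
       only to this process. */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    strcpy(p, "hello from the memory mapping segment");
    printf("%s (at %p)\n", (char *)p, p);

    munmap(p, len);
    return 0;
}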

Speaking of the heap, it comes next in our plunge into address space. The heap provides runtime memory allocation, like the stack, meant for data that must outlive the function doing the allocation, unlike the stack. Most languages provide heap management to programs. Satisfying memory requests is thus a joint affair between the language runtime and the kernel. In C, the interface to heap allocation is malloc() and friends, whereas in a garbage-collected language like C# the interface is the new keyword.

If there is enough space in the heap to satisfy a memory request, it can be handled by the language runtime without kernel involvement. Otherwise the heap is enlarged via the brk() system call (implementation) to make room for the requested block. Heap management is complex, requiring sophisticated algorithms that strive for speed and efficient memory usage in the face of our programs’ chaotic allocation patterns. The time needed to service a heap request can vary substantially. Real-time systems have special-purpose allocators to deal with this problem. Heaps also become fragmented, shown below:
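You can watch the program break with sbrk(0), which returns its current position. A small sketch; whether a given malloc() actually moves the break depends on how much free space the allocator already has, so the two values may or may not differ:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    void *before = sbrk(0);   /* current program break: top of the heap */
    void *p = malloc(64);     /* small request, served from the heap */
    void *after = sbrk(0);

    printf("break before malloc: %p\n", before);
    printf("break after malloc:  %p\n", after);

    free(p);
    return 0;
}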

Finally, we get to the lowest segments of memory: BSS, data, and program text. Both BSS and data store contents for static (global) variables in C. The difference is that BSS stores the contents of uninitialized static variables, whose values are not set by the programmer in source code. The BSS memory area is anonymous: it does not map any file. If you say static int cntActiveUsers, the contents of cntActiveUsers live in the BSS.

The data segment, on the other hand, holds the contents for static variables initialized in source code. This memory area is not anonymous. It maps the part of the program’s binary image that contains the initial static values given in source code. So if you say static int cntWorkerBees = 10, the contents of cntWorkerBees live in the data segment and start out as 10. Even though the data segment maps a file, it is a private memory mapping, which means that updates to memory are not reflected in the underlying file. This must be the case, otherwise assignments to global variables would change your on-disk binary image. Inconceivable!

The data example in the diagram is trickier because it uses a pointer. In that case, the contents of pointer gonzo – a 4-byte memory address – live in the data segment. The actual string it points to does not, however. The string lives in the text segment, which is read-only and stores all of your code in addition to tidbits like string literals. The text segment also maps your binary file in memory, but writes to this area earn your program a Segmentation Fault. This helps prevent pointer bugs, though not as effectively as avoiding C in the first place. Here’s a diagram showing these segments and our example variables:
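Putting the three cases side by side (the declarations mirror the article's own example variables; the string literal here is a stand-in for whatever gonzo points to in the diagram, and on a 64-bit build the pointer is of course 8 bytes rather than 4):

#include <stdio.h>

static int cntActiveUsers;        /* uninitialized: lives in the BSS */
static int cntWorkerBees = 10;    /* initialized: lives in the data segment */
static char *gonzo = "tidbit";    /* pointer lives in data; the literal
                                     lives in read-only text */

int main(void) {
    printf("bss:  %p\n", (void *)&cntActiveUsers);
    printf("data: %p\n", (void *)&cntWorkerBees);
    printf("text: %p\n", (void *)gonzo);  /* address of the string literal */
    return 0;
}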

You can examine the memory areas in a Linux process by reading the file /proc/pid_of_process/maps. Keep in mind that a segment may contain many areas. For example, each memory mapped file normally has its own area in the mmap segment, and dynamic libraries have extra areas similar to BSS and data. The next post will clarify what ‘area’ really means. Also, sometimes people say “data segment” meaning all of data + bss + heap.
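A process can dump its own areas by reading /proc/self/maps, one area per line (address range, permissions, offset, device, inode, pathname). A minimal sketch:

#include <stdio.h>

int main(void) {
    FILE *f = fopen("/proc/self/maps", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }

    char line[512];
    while (fgets(line, sizeof line, f))
        fputs(line, stdout);   /* stack, heap, mapped libraries, text... */

    fclose(f);
    return 0;
}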

You can examine binary images using the nm and objdump commands to display symbols, their addresses, segments, and so on. Finally, the virtual address layout described above is the “flexible” layout in Linux, which has been the default for a few years. It assumes that we have a value for RLIMIT_STACK. When that’s not the case, Linux reverts back to the “classic” layout shown below:

That’s it for virtual address space layout. The next post discusses how the kernel keeps track of these memory areas. Coming up we’ll look at memory mapping, how file reading and writing ties into all this and what memory usage figures mean.

Sneak into tech through the back door: a hopelessly limited how-to by Danilo Campos

Comments:"Sneak into tech through the back door: a hopelessly limited how-to by Danilo Campos"

URL:http://danilocampos.com/2014/01/sneak-into-tech-through-the-back-door-a-hopelessly-limited-how-to/


In 2008, while drowning in debt and yet another sticky Orlando summer, I decided I would change my career.

I would get a job making software for my iPhone.

The path forward wasn’t especially clear. My route was imperfect and entirely improvisational. I didn’t even realize I’d end up in San Francisco. Within five years, though, I would build and launch eight separate apps. The last three were under the aegis of venture-funded startups.

If you told me back then how many people would download and use my work, I’d have shit myself.

Sometimes people ask me how I managed to make this happen. Shared below is what worked for me. This is not comprehensive or universal advice. Nor is it the only way to get a job in this industry. It may not even be relevant any longer, given how much time has passed since I began. Timing may have been critical to some of what I tried. Standard warnings of survivorship bias apply.

Still, there’s a lot of possibility in careers like these. If there’s any part of my adventure that might help light your way, I’m happy to share.

Decide on what you want

You need to start with a goal in mind.

I want a career in tech.

Nope. Too broad.

Decide on a specific area of work you want to be doing. For me, this looked like

I want to make iPhone apps

because that was exciting! Little computers in your pockets with beautiful, full-color touch screens and constant internet connections. The possibilities seemed endless.

I think you probably need to pick something you find personally exciting, because the road is long. If you hate the work, the journey won't seem worth taking. Intrinsic reward keeps your chin up.

Identify the very next thing you can do to move forward

For me, “making iPhone apps” was an excitement mostly driven by interface design. I was always fascinated by computer interfaces and I wanted a chance to create more of my own.

But a UI won’t program itself.

If I wanted my designs to do anything, I’d need code to bring them to life. With plenty of debt and no savings, that left only one option:

I was going to have to learn how to program for the iPhone.

At first I wasn’t sure that was a plausible ambition. I asked a couple friends who were web developers if they thought I could figure it out.

You might want to try something a little simpler to start. I’ve heard it’s hard.

And they were right. It was pretty hard. The good news was that, years before, I had tried something simpler.

With 18 months under my belt coding for the Second Life scripting engine, I knew about variables, types, functions, loops, arrays and more. The very next thing I could do to advance my goal was to learn iPhone development.

With that knowledge, I could make an app. Which was what I wanted.

Learn relentlessly and without excuses

I decided that every night, no matter what, I’d spend an hour with a programming book. Not every night was wildly productive but constant exposure to the material prevented the rapid atrophy that would have otherwise destroyed my new programming muscles.

I moved slowly, but I never lost any ground.

Beyond discipline, though, lay a more practical challenge: at the time that I decided I wanted to learn about iPhone apps, Apple had imposed a non-disclosure agreement upon everyone who knew anything about the subject. No technical books could be published, nor blog posts, nor screencasts.

The only authority publishing anything about the iPhone was Apple itself. Which had things like this to say:

This table has entries that associate method selectors with the class-specific addresses of the methods they identify. The selector for the setOrigin:: method is associated with the address of (the procedure that implements) setOrigin::, the selector for the display method is associated with display’s address, and so on.

Which was always good for a

What the fuck?

Waiting for books to turn up would have slowed me down by almost half a year. I was impatient.

I was also in luck.

iPhone development had a language, toolchain and more than a few frameworks in common with another platform: OS X. By 2008, OS X was nearly a decade old. Plenty had been written about making software for your Mac.

So I resolved to learn Mac development until I understood enough to decrypt Apple’s documentation, or until someone could publish a book.

I kept moving forward.

Choose a project, then finish it

I first cracked Cocoa Programming for Mac OS X in July. By October, I’d learned enough that I could puzzle out how to do things for the iPhone. Sorta.

My first project was a game. The deal was, I’d show you four things and one of them wasn’t like the others. Pick the odd one to advance. Faster answers earned more points. Difficulty scaled by increasing to six options, then eight, then 10.

Straightforward parameters. A clear idea of when it was done.

The important thing was getting it done: it wasn’t enough to noodle. Choosing a project forced me to learn a variety of new skills and subjects. At the beginning, I didn’t understand the technical underpinnings of the basic rules I’d decided on.

By the end, I’d learned about timers, animation, managing collections, button styling, the modulo operator (!) and countless other topics. Projects force you to blaze a path in the dark. You’ll find treasures in there.

The only way to guarantee thorough explorations is to agree with yourself that you’ll finish the project. Without that expectation, it’s easy to waltz around doing what’s comfortable, while avoiding the stuff that makes you uneasy.

The uneasy stuff? That’s what you need to grow.

Then choose another project and finish it, too

I sold my first app in the App Store for $2.99. I made $322.14 in its first month of sales.

Not bad, but I couldn’t pay my rent on it.

Nor was anyone going to offer me a job on the strength of one project alone.

Once the first app hit the store, I went to work on a second project. This one was about counting – I’m bad at mental math. Within three weeks, the 1.0 was released on the App Store. 1.0 pulled in about $220 in the first two weeks, which felt pretty good.

I liked the app, so I spent a couple more weeks working on an update to add a few features users had asked for.

Apple liked this and put my app on the front page of the App Store. Total earnings that month exceeded $1,500.

By getting projects out the door, I was getting useful feedback about my work. It was also satisfying to know people were really using it.

Most importantly, it was an ongoing test. Earlier, I’d said,

I want to make iPhone apps

With two apps in the store, one thing was certain: I had been right about what I wanted.

Find a way to network (also without excuses)

A network is critical to success in any field. So it is with technology. Your network will pass along opportunities. Your network will help you learn. Your network will keep you up-to-date on important news.

I suck at networking.

I kinda suck at people, in truth.

Making friends is hard and, as an introvert, it comes at a certain energy cost. Here I got lucky: someone published a Python script and companion data source that would auto-follow hundreds of iPhone developers on Twitter. I added myself to the data source and figured out how to use the Python script.

Overnight, I had dozens of people eager to chat about iPhone development with me in realtime. For months, I’d been toiling alone.

Suddenly, I was part of a community. It would make all the difference in the world.

More than once.

Find a co-conspirator

My girlfriend at the time was effectively a vice president in my fledgling software enterprise. She helped with product names, gave honest feedback on design, and did internal QA on builds before they were submitted to Apple. Having someone who could consistently participate in the process made things a bit less lonely, while ensuring my blind spots got a regular check.

She also co-authored the fast-evolving plan to escape Orlando.

Hire yourself

By the summer of 2009, one year after committing to a future in iPhone apps, a lot had changed.

I had recurring, passive income from the App Store totaling about $1,000 per month. I had published three apps – the third being a travel data organizing tool that read your data from TripIt. I’d written thousands of lines of Objective-C and had begun to grok object-oriented programming.

What hadn’t changed was a desire to make iPhone apps. I wanted it more than ever.

No one was offering me a job, though. No iPhone jobs existed in Orlando. My resume was too thin to win the attention of further-flung hiring managers.

With the financial crisis looming, I did something pretty crazy: I left my employer of six years.

It was a great gig. My leadership respected me. They paid me generously. The challenges were varied. The lunch was free. I had it pretty good.

But I wasn’t making iPhone apps.

I reasoned that I’d need someone to hire me at least once in order for anyone else to think it was a good idea. I decided I’d roll the dice and hire myself. So I moved to a small, cheap, beautiful town on the west coast and made iPhone development my full-time occupation. A big part of me hoped I could make it all work on my own, employer be damned.

I couldn’t.

Use that network

I went all-in on both Bend, Oregon and on iPhone development. Between the recurring income and over $17,000 squirreled away from six months of frugal living, my then-girlfriend and I figured we could last 18 months.

If we could find part-time jobs.

It turned out picking up a few hours of part-time work was next to impossible against Bend’s 19% unemployment rate. Seven months after arriving in Bend, we were nearly out of money.

Weeks from living in my car.

Shit was scary. Moving was definitely necessary, but I wasn’t getting too far with my remote job hunting. I knew a guy on Twitter who’d liked my counting app, and he’d founded a startup.

In the nick of time, I had an offer with a relocation deal.

Thank god for networking.

Have a plan for being discovered and understood

My first startup job was complicated. I don’t think they knew quite what they wanted from the role. I wasn’t in a great position to help them figure it out – it being my very first startup job. Making matters worse was that I wasn’t actually making iPhone apps. I didn’t pass their technical assessment for a development gig, which was rooted in a lot of game design and hard CS stuff I wasn’t good at.

So I was making documentation to help other people make iPhone apps.

Which was sort of the same but not really at all. I had narrowly dodged the bullet of running out of money but in the process, my career had stalled.

I still wanted to make iPhone apps. So I job hunted for months. I crafted a weak application to Y Combinator.

I didn’t get anywhere.

In a fit of frustration, I took to my blog to write a love letter to Hipmunk, then a new startup whose approach I really liked. The post got their attention and I got an email.

I parlayed this into an in-person meeting. I made it clear I wanted to build their app. Afterward, I sent a series of mockups to really make the point and stand out from the competition.

The Hipmunk guys went to my website and found product pages for four apps, with screenshots. They found positive reviews for those apps when they downloaded them from the store.

A couple days later, they gave me an offer. They never even asked for my resume.

The critical pieces here were a body of work and a strong showcase of that work. I had to connect the dots as completely as possible. I had to show that I had the experience and knew how to do the project from start to finish.

Hiring can be a big gamble to a young company. Once you have someone’s attention, reducing the perception of risk is critical to getting that first big break. Hipmunk rolled the dice on me and we had fun getting to the front page of the App Store several times during my tenure.

It was great to finally get in – and to have enough experience to really deliver once I got there.

That’s all a long way of saying it’s up to you

You have to decide what you want.

You have to figure out what’s between you and getting there.

You have to learn and build continuously.

You have to finish the projects you start.

You have to build relationships.

You – probably – have to assume some risk.

You have to do the hard work of demonstrating what you can do, in person and online.

You have to make hiring you a no-brainer, no matter how different your path into the work may seem.

There are probably other sequences and strategies you could follow that would get you where you want to go. This is just what I used to get what I wanted.

And now I make iPhone apps.

I hope you find the examples helpful on your own journey.

Good luck.

U.S. Court: Bloggers Are Journalists - Robinson Meyer - The Atlantic

Comments:"U.S. Court: Bloggers Are Journalists - Robinson Meyer - The Atlantic"

URL:http://www.theatlantic.com/technology/archive/2014/01/us-court-bloggers-are-journalists/283225/


Tech bloggers—who are also journalists—at an Instagram event last year (Lucas Jackson/Reuters)

One of the great questions of our time came closer to resolution last week, when a federal court ruled that bloggers are journalists—at least when it comes to their First Amendment rights. 

The Ninth Circuit ruled as such on Friday in Obsidian Finance Group v. Crystal Cox, a complicated case first decided in 2011. The court found that even though someone might not write for the “institutional press,” they’re entitled to all the protections the Constitution grants journalists.

Background That Is Not About Are Bloggers Journalists

In 2010, Crystal Cox—an “investigative blogger”—published a series of angry posts about Obsidian Finance Group and its partners, alleging tax fraud, money laundering, and other crimes. The posts appeared on a set of aptly (and memorably) named websites, including “obsidianfinancesucks.com.” Obsidian and one of its partners, Kevin Padrick, sued Cox, alleging defamation.

Only statements of apparent fact can be ruled defamation. When the case went to trial, Oregon district court Judge Marco Hernandez ruled that most of Cox’s entries were too hyperbolic to count as anything but opinion, and thus could not be considered defamation—except for one post, which the Oregon district decided was sufficiently factual. A jury awarded Obsidian and Padrick $2.5 million in damages for the libel.

The New York Times’s media reporter David Carr wrote about the case that year, ruling it less about journalism than Right and Wrong: “She didn’t so much report stories,” he said of Cox, “as use blogging, invective and search engine optimization to create an alternative reality.”

Other things were going on in the case. Cox claimed that her sources for the tax fraud claim were secret, and that Oregon’s media shield law protected her from revealing them. Hernandez decided that she did not qualify for shield protection under the law, partly because she had offered to take down the offending posts for $2,500 per month.

But this new appeal ruling, the one on Friday, turned on something else—the intersection of two pre-existing pieces of case law, New York Times Co. v. Sullivan and Gertz v. Robert Welch, Inc. Both dictate what kinds of speech qualify as defamation.

In the landmark 1964 Sullivan, the Supreme Court ruled that public figures can only seek claims for defamation if false information was published with “actual malice.” In 1974’s Gertz, meanwhile, the same court ruled that false information about private individuals qualified as defamation if it was negligently published.

Taken together, the two cases establish a meshing precedent: To count as defamation, false information about public figures must be published with malign intent. False information about private figures, meanwhile, must merely be published negligently.

Cox claimed that Obsidian and its partners were public figures, an assertion the Ninth Circuit nixed. Writing for the court, Judge Andrew Hurwitz said that her posts, while about private figures, covered a topic of public concern. They fell, he said, under the domain of Gertz. The information contained in them could not be merely wrong: It had to be negligently published.

Crucially, the jury in the 2011 trial, Hurwitz said, had never been informed of such a stipulation. 

The Bloggers and Journalists Part

Cox might not qualify for Gertz's protections if she was not part of a media organization. If Cox is a blogger, not a journalist, and if only journalists are entitled to Gertz's negligence standard, then she might not qualify for its protections at all.

Was Cox, a self-titled blogger, in fact a journalist? On this, Hurwitz was clear.

“Although the Supreme Court has never directly held that the Gertz rule applies beyond the institutional press, it has repeatedly refused in non-defamation contexts to accord greater First Amendment protection to the institutional media than to other speakers,” he wrote. In one case, he said, “the Court expressly noted that ‘we draw no distinction between the media respondents and’ a non-institutional respondent.”

Hurwitz goes on, extending journalistic protections to all those liberated of their institutions:

The protections of the First Amendment do not turn on whether the defendant was a trained journalist, formally affiliated with traditional news entities, engaged in conflict-of-interest disclosure, went beyond just assembling others’ writings, or tried to get both sides of a story. As the Supreme Court has accurately warned, a First Amendment distinction between the institutional press and other speakers is unworkable: “With the advent of the Internet and the decline of print and broadcast media . . . the line between the media and others who wish to comment on political and social issues becomes far more blurred.”

So bloggers—even slimy ones—are, at least legally, journalists. Cox’s case will get a new trial in Oregon’s district court, and the jury will be appropriately informed of the Gertz rule. Perhaps the award of damages will be reduced.

And we, those following the case at home, can change into our pajamas, order pizza to our various apartments, and blog away. We will not just be bloggers—we will be, according to the law, journalists.

inessential.com: Network Solutions Auto-Enroll: $1,850

Comments:"inessential.com: Network Solutions Auto-Enroll: $1,850"

URL:http://inessential.com/2014/01/21/network_solutions_auto-enroll_1_850


I got an email from Network Solutions — where I still have two domains, originally registered in the ’90s — that informed me I have been enrolled in their WebLock Program.

To help recapture the costs of maintaining this extra level of security for your account, your credit card will be billed $1,850 for the first year of service on the date your program goes live. After that you will be billed $1,350 on every subsequent year from that date. If you wish to opt out of this program you may do so by calling us at 1-888-642-0265.

(Here’s a screenshot of the email. I’ve pasted in the text at the end of this post.)

I couldn’t believe that I’d been opted in, without my permission, to any new product — and I was stunned when I saw how much it cost. And further surprised when I saw that I would have to make a phone call to deal with all this.

I found on Twitter that Network Solutions has an account. So I asked if this was phishing. The reply: it’s real.

My next step was, of course, to post a screen shot of the email on Twitter.

@netsolcares responded by suggesting I call the security team.

@brentsimmons Give the security team a call and they can explain 1-888-642-0265 Sorry for the inconvenience. ^rr

I’m not going to call the security team. (Which I’d bet is really a sales team.) I’ve been a customer of Network Solutions since 1997. While their website was always kind of a pain, I’ve never had problems with them.

But this goes way beyond acceptable behavior for any company I do business with. It’s extreme.

So I’ll be transferring those domains elsewhere. (Folks have recommended Hover and Dynadot.)

Text of the email

Dear Brent Simmons,

Cybercriminals continue to strengthen and evolve the techniques and tools they use to assault our customers' websites and domain names. According to the Symantec Internet Security Annual report the pace and frequency of hacking, phishing and social engineering has increased over 42% in 2013.

Compromised domain names can lead to substantial brand, reputational and financial damage. Once a domain name is compromised cybercriminals have total control of the content that appears on "your" website. In many cases objectionable content is posted or phishing sites are established whereby your customers' private information can be exploited.

At Network Solutions we take your security very seriously. We deploy some of the most advanced security monitoring and defense mechanisms in the industry to ensure only authorized users can access your company's domain name and name servers. Given the level of traffic to your website, we are taking another significant step to protect your domain name security.

Starting 9:00 AM EST on 2/4/2014 all of your domains will be protected via our WebLock Program. Here is how the program works:

  • In order to make changes to your Domain Name's configuration settings you must be pre-registered as a Certified User.
  • All requests for Domain Name configuration changes must be confirmed by an outbound call we make to a pre-registered authorized phone number you establish. A unique 9 digit PIN will be required when we call.
  • A message alert will be sent to all Certified Users notifying the team which Certified User has made the request.
  • In addition WebLock enrolled customers will have access to a 24/7 NOC and rapid response team in the event of any security issues.

To establish Certified Users and pre-register authorized phone numbers and email addresses please call 1-888-642-0265 Monday to Friday between 8:00AM and 5:00PM EST. Please make sure to establish Certified Users with authorized phone numbers and email addresses before launch date. Once established, the unique 9-digit PINs for each certified user will be mailed to you within 45-days.

To help recapture the costs of maintaining this extra level of security for your account, your credit card will be billed $1,850 for the first year of service on the date your program goes live. After that you will be billed $1,350 on every subsequent year from that date. If you wish to opt out of this program you may do so by calling us at 1-888-642-0265.

We strongly encourage you to take advantage of this security program and register Certified Users before the program launch date. Thank you for helping us protect you better.

Regards
Geof Birchall

Geof Birchall
Chief Security Officer
Network Solutions

Connect With Us

Please do not reply to this email. Replying to this email will not secure your services. Please click here to unsubscribe. Please note that unsubscribing from our marketing emails will not affect important transactional correspondence such as administrative and renewal notices related to your account. Please review our Privacy Notice for any questions related thereto and please see our Services Agreement for the terms and conditions governing Network Solutions products and services.

©2014 by Network Solutions, LLC. All rights reserved.
12808 Gran Bay Parkway, West | Jacksonville, FL 32258
Network Solutions is a Web.com Group, Inc. company.

bash - Have I tattooed a syntax error on my arm? - Stack Overflow

Comments:"bash - Have I tattooed a syntax error on my arm? - Stack Overflow"

URL:http://stackoverflow.com/questions/21186724/have-i-tattooed-a-syntax-error-on-my-arm?newsletter=1&nlcode=4592%7Cdedb


A few months ago I tattooed a fork bomb on my arm, and I skipped the whitespace because I think it looks nicer without it. But to my dismay, sometimes (not always) when I run it in a shell it doesn't start a fork bomb but just gives a syntax error.

bash: syntax error near unexpected token `{:'

Yesterday it happened when I tried to run it in a friend's bash shell; then I added the whitespace and it suddenly worked: :(){ :|:& };: instead of :(){:|:&};:

Does the whitespace matter, have I tattooed a syntax error on my arm?!

It seems to always work in zsh, but not in bash.

Edit: A related question does not explain anything about the whitespace, which really is my question: why is the whitespace needed for bash to be able to parse it correctly?

MakeGamesWithUs Summer Academy receives 1,000 applications in first month

Comments:" MakeGamesWithUs Summer Academy receives 1,000 applications in first month "

URL:https://www.makegameswith.us/gamernews/338/makegameswithus-summer-academy-receives-1000-appl


A little over a month ago we quietly launched our new Summer Academy, a two-month in-person course where students will design, code and ship their own original iPhone game. In the first month, we received 1,000 applications (400 final, 600 drafts). This year we’ll have 200 spots for the Summer Academy, meaning we’re on pace to be as selective as top colleges in our inaugural class.

I see a day when the traditional four-year college degree will be replaced… which is where [MakeGamesWithUs] comes in. - San Jose Mercury News

The Summer Academy is a spinoff of our wildly popular internship program, where we had 75 college and high school students building games out of our living room. Between the two years we’ve run the internship and the courses at MIT and UC Berkeley created from our curriculum, over 200 students have taken our in-person courses. And thanks in part to being featured in the Hour of Code, over half a million students have started learning to build iPhone games on our website!

@MakeGamesWithUs Thank you! I've never seen this student engaged or interested in doing ANYTHING related to school. Niche FOUND! #HourofCode - Sherry Gick, Teacher

As early as last August we started receiving interest in an in-person program for next summer, and it became clear there was huge demand for a formal in-person course. Last year we had students move out from Chicago and Mexico City for our summer program, and this year we've already admitted applicants from as far away as India and Australia!

Our past graduates have gone on to do amazing things. Sophie Sheeline and Madelynn Taylor won a fellowship and presented their game ER Rush at the White House!! Yerin Kim and Kate Lee used our curriculum to teach a class at UC Berkeley. Other graduates have gone on to intern at both big tech companies like Square and startups like Flutter (acquired by Google).

My MGWU experience helped me get a job at Square this past summer. Career fair recruiters were impressed when I demoed my game! - Katie Siegel, MIT ’16

We've been very impressed by the quality and diversity of this year's applicants, from a high school student who's already shipped 28 apps to a technical advisor to the Navy who has a master's degree. They've also built impressive products in the offline world: one founded a company that manufactured and sold wheat crisps; another constructed a fully functioning air conditioner from scratch. We're incredibly excited by the response to the Summer Academy so far and we're looking forward to a terrific inaugural class! So what are you waiting for? Apply now!


Outbox Blog • Outbox is Shutting Down--A Note of Gratitude

Comments:"Outbox Blog • Outbox is Shutting Down--A Note of Gratitude"

URL:http://blog.outboxmail.com/post/74086768959/outbox-is-shutting-down-a-note-of-gratitude


Dear loyal customers and Outbox supporters: 

We announce today that we are ending the mail service, shutting down the Outbox brand, and focusing our team and resources on a totally new product.  In a related post we explain what this means for our current, loyal customers.  

This is bittersweet for our team, since we have poured so much into creating our product and serving our customers. Yet this announcement marks the end of a chapter for us—not the end of the story. 

Over the past two years, we have been humbled by the support of our investors and customers, who each took a leap of faith to join us in this exceptionally audacious venture. We have also been humbled by the incredible personal and financial sacrifices our team members have made along the way.

The Beginning
We set out to redefine a long cherished but broken medium of communication: postal mail. We did so during a tumultuous period in the history of the United States Postal Service (USPS), which has experienced declining mail volume and staggering deficits for the past five years. 

As former staffers on Capitol Hill, we share a passion for tackling huge public problems, but aim to do so with private solutions. We knew that the USPS would not be able to work out its own problems so, perhaps naively, we hoped to partner with USPS to provide an alternative to the physical delivery of postal mail to a subset of users, hoping this would spur further innovation and cost savings. 

Although an early test with the USPS that let users redirect their mail to us showed signs of success and operational simplicity, an interview by CNBC triggered a request from the Postmaster General himself to meet in Washington, DC. In one of the most surreal moments of our lives (listen to it in an NPR interview), we had our very own Mr. Smith Goes to Washington encounter where the senior leadership of USPS made it clear that they would never participate in any project that would limit junk mail and that they were immediately shutting down our partnership.  This 30-minute meeting was the end of our business model. 

The Reimagining of Outbox
After countless hours applying Clay Christensen’s business model theories to our situation, we came to view our failed partnership with USPS as a David and Goliath moment: we believed our seeming disadvantage would become our greatest strength. Turning our original vision on its head, we reimagined our service as not merely playing in someone else’s value channel, but as a new type of last-mile delivery channel altogether: one subsidized by our users in return for collecting and electronically delivering their postal mail. If we could simply break even on the mail business, we would have built a valuable last mile network able to be monetized in many ways. 

To pull this off, we built a world-class team of engineers, designers, marketers, and operations specialists in Austin and San Francisco.  Funding our efforts were some of the most celebrated investors of our generation: Mike Maples at Floodgate and Peter Thiel and Brian Singerman at Founders Fund, as well as a groundswell of investors via AngelList. Together, we made a product that was as beautiful as it was complex, and overcame nearly every obstacle in our path. 

We created our own dynamic logistics software, developed a legal framework to open users’ mail, built industrial-grade scanning machines for 1/100th of the market price, developed specialized OCR to allow customers to unsubscribe from postal mail, built and attached to our cars 5-foot mailbox flags that withstand 70 mph highway speeds, laser cut wood blocks to build mail slot solutions, and created a novel system of key decoding via photograph that inspired the creation of one startup all on its own. All this was simply the backend of our service, and our iPhone and other apps won awards for their design and elegance.

In the end, we serviced a little over 2,000 individual customers, had 25,000 people waiting around the country on our waiting list, unsubscribed our customers from over 1 million senders of mail, scanned over 1.5 million pages, and delivered over 250,000 requested mail packages. We also recycled approximately 30 tons of paper, enough to cover 86 football fields.

Outbox was buzzing.  It seemed as though everyone knew something about our little company, had seen one of our red-flagged mailbox cars, or had stumbled upon a news story about us.  CNN praised us, Jay Leno mocked us, and Pee Wee Herman called us “the future.” We tested our anecdotal suspicions with a nationwide survey, and found that Outbox had an unaided brand awareness of 10.1 percent - even though we serviced a mere 2,000 customers in two relatively small markets.

Numbers Don’t Lie
After raising $5m in June of last year, we set out to onboard the 4,000 individuals we had amassed on our central-San Francisco waitlist. We projected converting a large percentage of these individuals, and planned to scale our marketing efforts at a projected cost of $20 per acquisition.

However, after an extensive email marketing campaign to our waitlist, total yield from the waitlist was under 10 percent. And as we started marketing outside of this network, we had difficulty finding a repeatable and scalable acquisition channel. Across all of our efforts, our acquisition numbers were over $50 per lead. 

As our marketing efforts lagged behind schedule, our density numbers remained consistently flat, causing us to spend about double our projected cost to service each customer. Even our most dense routes cost us approximately 20 percent more than our break-even target.

After several months of testing and refining, we reasonably concluded that we were executing well and collecting good data—it told us that there wasn’t enough demand to support the cost model.  Our monthly operating deficits were too high, and even though we continued to get better at acquisition, each small success actually saw our cash curve decline further because our density remained flat. For longer than we would be willing to tolerate, we would lose money for each additional customer we gained. Despite the massive interest in our company, we learned that the product we built did not find fit in the market we targeted. 

Finding serenity in knowing when to stop 
For startups, it’s difficult to know when to throw in the towel. Indeed, the main strategy for most of the life of a startup is overcoming impossible odds, and we built a team that did that over and over again. 

This final challenge—product market fit—is one we ran after with characteristic zeal. Amidst these struggles we were reminded of the serenity prayer written by one of our favorite authors, Reinhold Niebuhr: 

Grant me the serenity to accept the things I cannot change,
The courage to change the things I can,
And wisdom to know the difference.

Facing situation after situation we kept the courage to change them; in these final few months, we were granted the serenity to know this situation is one we cannot change.

Our Unpostmen
Not least of our accomplishments was the recruitment of an amazing operations team. Their tireless work ethic led to such stellar customer service that our users would, on occasion, pee their pants. The hardest reality in our transition is saying goodbye to our eighteen Unpostmen: they have been the frontlines of challenging logistics and extreme customer service. If you are hiring and looking for loyal, hard-working operations experts, please email us here. 

Our Future
There are numerous small pivots we could make as an alternative to our mail service, and we have tested many other applications that could be layered on top of the logistic network we already created. Yet each of these tangential services has a tragic combination of being costly to pull off and, well, not a big idea. 

While it saddens us that we are leaving so much behind in terms of what we built and developed for mail, we are equally excited to begin a new chapter in our company’s life. Our team has been working on a new product that has already shown signs of success, and we believe it has the opportunity to be massively disruptive.  We can’t wait to tell you more about it—but for now we’re in stealth mode. 

Our many learnings have led us to tackle a problem in many ways similar:  a giant, sleepy industry that serves every American, is generally hated, and is in need of radical new solutions that involve hardware, software, and logistics. 

Forefront in our minds are the learnings from the wild adventure of Outbox:

  • Giant, complex systems appear insurmountable, but aren’t—they were built by people just like you and me
  • The main asset the government (and big companies) has is time—which is the resource of which startups have the least. 
  • You may think government organizations are completely, insanely backwards; you are wrong—they are worse. 
  • If you can’t find a hardware solution to your needs, build it—it’s not that hard. 
  • Doing extraordinary things for customers is time consuming and hard—but very worthwhile.   
  • Life is too short to pursue anything other than what you are most passionate about. 


For the last two years, we got out of bed every morning because of the chance to re-imagine a daily activity for every American.  We’ve had many sleepless nights these last few months, but are excited to be turning our complete attention to a new, equally compelling reason to wake up each morning. 

With thanks,
Will & Evan, Outbox Cofounders

Bug #9424: ruby 1.9 & 2.x has insecure SSL/TLS client defaults - ruby-trunk - Ruby Issue Tracking System

Comments:"Bug #9424: ruby 1.9 & 2.x has insecure SSL/TLS client defaults - ruby-trunk - Ruby Issue Tracking System"

URL:https://bugs.ruby-lang.org/issues/9424


Ruby 1.9, 2.0, and 2.1 use insecure defaults for SSL/TLS client connections. They have inherited or overridden configs that make the OpenSSL-controlled connections insecure. Note: both OpenSSL's and Ruby's defaults in all tested versions are currently insecure. Confirmation of the issues with Ruby's TLS client can be done with the code in [1].

Ruby is using TLS compression by default. This opens Ruby clients to the CRIME attack[2].

Ruby also uses a variety of insecure cipher suites. These cipher suites either use key sizes much smaller than the currently recommended size, making brute-force decryption easy, or do not check the veracity of the server's certificate, making them susceptible to man-in-the-middle attacks[3][4].

Ruby also appears to allow SSLv2 connections by default. It does so by first connecting with an SSLv2 client hello that carries a higher SSL/TLS version inside it, which allows SSLv2 servers to work. SSLv2 was broken in the 1990s and is considered unsafe.

These issues expose Ruby users to attacks that have been known for many years and are trivial to discover. These defaults are often build-specific, and are not the same across platforms, but are consistently poor (the code in [1] can evaluate the build). A patch from a core developer on the security@ list is attached. However, the patch does not correct the suspect SSLv2 configuration. It is believed that Ruby 1.8 is also a concern, but, since it was obsoleted, it has not been investigated.

A report similar to this was sent to security@ruby-lang.org four days ago. The Ruby core developers have been unable to patch these problems in a timely manner, for what I and others believe are concerning reasons. This ticket is being made to allow engineers outside of the small group on security@ to protect themselves from these attacks.

[1] https://gist.github.com/cscotta/8302049
[2] https://www.howsmyssl.com/s/about.html#tls-compression
[3] https://www.howsmyssl.com/s/about.html#insecure-cipher-suites
[4] TLS_DHE_DSS_WITH_DES_CBC_SHA - small keys
TLS_DHE_RSA_WITH_DES_CBC_SHA - small keys
TLS_ECDH_anon_WITH_3DES_EDE_CBC_SHA - MITM
TLS_ECDH_anon_WITH_AES_128_CBC_SHA - MITM
TLS_ECDH_anon_WITH_AES_256_CBC_SHA - MITM
TLS_ECDH_anon_WITH_RC4_128_SHA - MITM
TLS_RSA_WITH_DES_CBC_SHA - small keys
TLS_SRP_SHA_WITH_3DES_EDE_CBC_SHA - MITM
TLS_SRP_SHA_WITH_AES_128_CBC_SHA - MITM
TLS_SRP_SHA_WITH_AES_256_CBC_SHA - MITM

Chris Hates Writing • Today my startup failed

Comments:"Chris Hates Writing • Today my startup failed"

URL:http://chrishateswriting.com/post/74083032842/today-my-startup-failed


Today my startup failed

No soft landing, no happy ending—we simply failed.

It’s been a long four year journey, full of highs and lows. I am simultaneously incredibly proud, and incredibly disappointed.

I’m incredibly proud of an amazing team and all that they have accomplished. Our most recent product, DrawQuest, is by all accounts a success. In the past year it’s been downloaded more than 1.4 million times, and is currently used by about 25,000 people a day, and 400,000 last month alone. Retention and engagement are great. And yet we still failed.

It may seem surprising that a seemingly successful product could fail, but it happens all the time. Although we arguably found product/market fit, we couldn’t quite crack the business side of things.

Building any business is hard, but building a business with a single app offering and half of your runway is especially hard (we created DrawQuest after the failure of our first product, Canvas). I’ve come away with newfound respect for those companies who excel at monetizing mobile applications. As we approached the end of our runway, it became clear to us that DrawQuest didn’t represent a venture-backed opportunity, and even with more time that was unlikely to change.

I’m terribly saddened that this may spell the end for our wonderful community, but it’s my goal to use what little money we’ll have left after the wind-down to keep the service alive for another few months. However as of today the team has gone their separate ways, and our doors are effectively closed.

I’m disappointed that I couldn’t produce a better outcome for those who supported me the most—my investors and employees. Few in business will know the pain of what it means to fail as a venture-backed CEO. Not only do you fail your employees, your customers, and yourself, but you also fail your investors—partners who helped you bring your idea to life.

In my case, I am extremely lucky and grateful to be partners with people who are simply the best. What separates the best investors is not how they help you when you’re a rocketship, but when your ship is on fire and you’re venting atmosphere. In this case, our investors have demonstrated what sets them apart from the rest—they’ve supported me throughout the ups and downs, and especially the downs.

With that said, life goes on, and the best path forward is not a wounded one, but a more learned and motivated one. I’m definitely not itching to start another company any time soon—it will take time to decompress and reflect on the events of the past four years—but I hope that if I do some day decide to pursue a new dream, I’ll be in a much better position to. After all, I did just receive a highly selective, four-year education for a mere $3.6 million dollars! (I find humor helps as well.)

As for what’s next, I honestly have no idea. This is the first time in four years I’ve been at a crossroads like this.

One thing I’ll be doing more of is writing about my experience. Partially because it’s therapeutic, but also because if there’s a silver lining in all of this (and there is), it’s that I can help educate others about a path fraught with hardship, but rewarding nonetheless.

I’m also particularly inspired by what Everpix has done by making so much of their story public, and hope many others will follow in their footsteps of radical transparency. I don’t wish to glorify my failure, but it’s certainly something I’d rather embrace than hide behind for the next five years.

On that note, if there are any specific topics you’d like me to write about, feel free to submit them via my Ask page. Or if you’d like to say hi in general, please don’t hesitate to e-mail me at moot@4chan.org. Believe me—I’ll have a lot of free time on my hands these next few weeks.

Last but not least, to my team—Dave, Eunsan, Alex, Nick, Shaun, and Jim—I am eternally grateful. To my investors—Union Square Ventures, Andreessen Horowitz, Lerer Ventures, SV Angel, Founder Collective, Chris Dixon, and Joshua Schachter—I could not have done this without you.

And to everyone who has supported us over the past few years, from the bottom of my heart: thank you.

With Traction But Out Of Cash, 4chan Founder Kills Off Canvas/DrawQuest | TechCrunch

Comments:"With Traction But Out Of Cash, 4chan Founder Kills Off Canvas/DrawQuest | TechCrunch"

URL:http://techcrunch.com/2014/01/21/when-goods-not-good-enough/


“There’s a lot of glorification of startups and being a founder. People brush the failures under the rug, but that’s the worst thing you can do. You kind of have to face it head on,” says moot, aka Christopher Poole. So rather than raise more money for his remix artist community Canvas and game DrawQuest, later today he’ll announce they’re closing. “No soft-landing, no acqui-hire, just ‘shutting down’ shutting down.”

[Update: DrawQuest and Canvas have now published blog posts confirming this article and telling their users what's going on. Moot has also penned his own eulogy for his startup, and will be writing more in the future in hopes of educating other entrepreneurs.

In a touching part of his post-mortem, moot opens up saying "Few in business will know the pain of what it means to fail as a venture-backed CEO. Not only do you fail your employees, your customers, and yourself, but you also fail your investors—partners who helped you bring your idea to life."]

What’s different about this trip to the deadpool is that DrawQuest was actually doing relatively well. Launched a year ago to inspire people to take on daily bouts of creativity through drawing challenges, it reached 1.4 million downloads, 550,000 registered users, 400,000 monthly users, 25,000 daily users, and 8 million drawings.

“We’re doing better than 98% of products out there, especially in the mobile space,” says moot, but he admits that traction is “shy of that all-important million (monthly users). Where we failed basically was one: to crack our growth engine. But importantly, we were never able to crack the business side of things in time.”

Perhaps if DrawQuest was the plan all along, it could have survived long enough to grow and monetize, but it was on a short fuse. Moot originally raised a $625,000 seed round led by Lerer Ventures in May 2010 to start DrawQuest’s predecessor Canvas, a media-centric forum where people could post, remix, and discuss visual Internet art. Then he raised $3 million more in June 2011 in a Series A led by Union Square Ventures’ Fred Wilson and joined by SV Angel, Lerer Ventures, Andreessen Horowitz, Founder Collective, and Joshua Schachter.

It wasn’t until February 2013 that DrawQuest launched, and that tardy pivot left moot lagging far behind where he needed to be. “We built this app with less than half of our runway remaining. You have to do twice as much with half as much time. It’s really freaking hard.” For seed stage companies it might be easier, but proving you’re worth the valuation of a Series B upround requires incredible metrics that are tough to reach if you have to audible late in the game. “People trivialize pivoting but it’s truly a hail mary, and it’s rare that people can pull this off.”

DrawQuest got some traction, but found that selling paint brushes in a drawing app is a lot harder than selling extra lives in Candy Crush. There’s just not the same emotional ‘I can’t play if I don’t pay’ urgency. “I definitely have a new appreciation for game designers,” moot tells me.

With Canvas/DrawQuest’s headcount incurring serious costs, moot searched for someone to acquire his startup. “We approached a few companies and no one was buying what we were selling. [We were] never trying to win any awards with our brushstroke algorithms, so from an IP standpoint [there wasn't much to buy]. The nut that was interesting was the community, but it wasn’t really clear what exactly this community would do for their business.”

After running “Wild West of the Internet” image-sharing site 4chan since 2003, moot was actually looking forward to not being the head honcho for once. “I thought we were doing great work and we could continue to do great work as part of a bigger organization. I had kind of psyched myself up for that, but then…” no deal materialized.

“Ultimately we decided we wouldn’t go try to raise more money – it wasn’t really on the table because we just hadn’t created enough value,” says moot. That’s a rare admission of failure in the success theater of the startup world. Most founders trumpet their funding rounds and growth milestones but slink away when things go pear-shaped. Poole’s willingness to be humble and transparent is admirable, and could increase investors’ willingness to back his future projects.

So today he’ll announce that Canvas is shutting down in the next few days, and users will get an email with a link to download all their content.

As for DrawQuest, moot says, “I’m going to try to keep the servers up as long as I can. As of today all of the company’s employees are going their separate ways… but I’m hoping that between in-app purchases and whatever money is in the bank we’d be able to keep the service alive for a bit longer. We think it makes sense to pay our AWS bill until we’re completely out of money, which will hopefully be a few months.” Perhaps even longer, as moot dreams that maybe “some white knight comes in and says ‘I want to chip in for the server costs.’”

In DrawQuest’s goodbye post, moot writes “We hope you’ll all continue to spread the importance of daily creativity, and inspire those around you to draw more often. While DrawQuest may not be around next year, you all will be, and we hope you’ll leave the world a better, more creative place.”

And as for moot himself?

“I’m a free agent for the first time in over 4 years because I was in college when I dropped out to start this company. I’m definitely not trying to start another company anytime soon. I need to decompress and reflect on what I’ve learned and take some time to myself because it’s been a bit of an emotional rollercoaster. You start to appreciate why the best investors are the best investors. In our final hour everyone was so supportive. It’s made the difference between me being an emotional wreck and me being in as good of a place emotionally as you can be when you fail. Most companies fail, and unfortunately we are one of those companies. Those are the odds.”

New Year, New CEO for GitHub · GitHub


Comments:"New Year, New CEO for GitHub · GitHub"

URL:https://github.com/blog/1761-new-year-new-ceo-for-github


It's a brand new year, and each year calls for reflection on where we've been, where we're going, and how each of us here at GitHub can best focus our talents and energy. To kick off 2014 I've asked my long-time friend and GitHub cofounder, Chris Wanstrath, to take the role of CEO. In this role, Chris will be responsible for leading the company, defining our vision, and working with our amazing team to establish and execute the strategies necessary to achieve our most ambitious goals.

I'll continue to work closely with Chris on vision, strategy, and execution in the role of President of GitHub. This shift will allow me to take responsibility for R&D and new growth opportunities within the company. I'll also be thinking deeply about how we can continue to optimize for happiness as we grow, and will remain the company's public champion and primary spokesperson.

We tend to do things differently here at GitHub, and remaining fluid in how we define our roles is a big part of that. In fact, Chris and I have stepped into these roles over the past few months and today we're simply acknowledging the change publicly. While we don't use titles heavily at GitHub, we think in this case they're useful to communicate areas of responsibility both internally and externally.

2014 is going to be an exciting year. I, for one, can't wait to see what happens!

Tom Preston-Werner
Cofounder & President, GitHub, Inc.
