Channel: Hacker News 50

thinkbroadband :: Sky parental controls break jquery website


URL:http://www.thinkbroadband.com/news/6261-sky-parental-controls-break-jquery-website.html


The Sky parental controls system recommends that customers leave on the phishing and malware protection even if they disable all the other elements, but as of 22:30 on Sunday 26th January 2014 this is leading to all HTTP access to code.jquery.com being blocked for Sky customers, with the parental controls returning an access denied page.

code.jquery.com may not sound like a mainstream website, as it is really aimed at web and JavaScript developers, but it is pretty common for websites to link to the released JavaScript (.js) files for jQuery and a host of other tools on code.jquery.com, as the site is a CDN for these files. The result is that many sites may not be performing as expected today.

The advice appears to be for Sky customers to log into their web account, and in the Sky Broadband Shield section turn off the Phishing/Malware filter, or alternatively disable the shield completely.

We will update as and when we hear of any changes and why code.jquery.com has been blocked, i.e. how did it end up categorised as a phishing domain.

Update 8:45am Sky still appears to be blocking code.jquery.com and all files served via the site. More worryingly, if you try to report the incorrect category, you get an error page after signing in on the Sky website.

We suspect the site was blocked because it was linked to by a genuinely malicious website, i.e. code.jquery.com and some of its JavaScript files were referenced on a dodgy website, and every domain mentioned was subsequently added to a block list.

For any webmasters affected by the blocking, the solution is to switch to a locally hosted copy of the file, or to use another CDN, e.g. the Google jQuery CDN.
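
A common way to do this without giving up the CDN's benefits is the standard fallback pattern, sketched below (the Google CDN URL and the local path are illustrative; adjust the version and path to match your site):

```html
<!-- Try the Google-hosted copy first; if it failed to load (for example,
     because the CDN is blocked by a filter), fall back to a locally
     hosted copy. -->
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
<script>
  window.jQuery || document.write('<script src="/js/jquery-1.10.2.min.js"><\/script>');
</script>
```

The test on `window.jQuery` works because the fallback check only runs after the first script tag has either loaded or failed.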

Update 9:45am It appears that the jQuery CDN is unblocked once more on Sky connections where the phishing filter that is part of the parental controls system is enabled.

Update 5:30pm Our enquiry with Sky has elicited a response: "JQuery was temporary blocked this morning having been misclassified. Our review process kicked in shortly afterwards and the site was unblocked just over an hour later." Sky also note that they identified the problem at 8:30am and resolved it by 9:45am. As for why code.jquery.com was categorised as malware, nothing has been confirmed, but Sky are talking to Symantec, who provide the content list and updates.

Norton ConnectSafe sounds like a simplified version of what Sky has implemented, but since the free service only requires you to change your DNS servers, it may prove useful for webmasters as a way to check whether their site is blocked.



OCW Bookshelf | Open Matters


URL:http://mitopencourseware.wordpress.com/ocw-bookshelf/


MIT OpenCourseWare shares the course materials from classes taught on the MIT campus. In most cases, this takes the form of course documents such as syllabi, lecture notes, assignments and exams.

Occasionally, however, we come across textbooks we can share openly. This page provides an index of textbooks (and textbook-like course notes) that can be found throughout the OCW site.  (Note that in most cases, resources are listed below by the course they are associated with.)


Floorplan Light Switches - jnd.org


URL:http://www.jnd.org/dn.mss/floorplan_light_swit.html


Floorplan Light Switches

Once upon a time, a long time ago, I got tired of light switches that contained a long, one-dimensional linear array of switches mounted on a vertical wall controlling a two-dimensional placement of lights that were placed on a horizontal plane.  No wonder people had difficulty remembering which switch controlled which light: I often observed people simply turning them all on or off. 

See the figure for a typical, confusing linear array of light switches. (This is figure 4.4 of my book, the Design of Everyday Things, revised edition, 2013: DOET.)

Why not solve the problem by arranging the light switches in what I have called a "natural mapping" between the controls and the lights? Arrange the switches in the same spatial configuration as the lights, and then mount the switches on the same spatial plane as the lights. Because switches are usually mounted on a vertical wall, whereas lights are usually located on a horizontal plane, either standing on the floor or on tables or mounted on the ceiling, the switches should be placed on a horizontal plane.

Why not arrange the switches on a floor plan of the space, so it would be easy to determine which switch worked which light? I built two such arrays of switches, one for my home and one for my research laboratory. I used a sloping plane rather than a horizontal one to make it easier to mount (the slope reduced the extruding surface and made the plan easier to read), and having a slope made it easier to understand the orientation. I published the story and the results in the first edition of The Design of Everyday Things (then called The Psychology of Everyday Things) 25 years ago, in 1988.

See the image below, which shows the floor plan, the close-to-horizontal plane for the switches, and the "You are here" X on the floor plan to help people orient themselves. (This is Figure 4.5 of DOET, 2013.)

When I revised the book, I repeated the story and asked why nobody had ever followed up on the suggestion. I told the story of my failed attempt to convince the CEO of a smart-home company to pay attention to the idea.

Ilja Golland, a German reader of DOET, just sent me the URL for a "floor plan light switch" designed by Taewon Hwang (evidently in 2011).  Hwang is identified by one website as a "Civil Engineer at Hyundai Asan, S. Korea." See the figure.

Hwang seems to have been unaware of my earlier example, and he only solved half the problem: he still mounts his switches on a vertical surface. His switches seem to be meant only as a concept sketch (much as mine were), and I can find no further information about either Hwang or commercial production of the switch. If some commercial company does decide to adopt it, I hope they place the switches on a horizontal plane (or at least a sloping one). They won't be able to patent the idea: my public disclosure in 1988 means there is prior art. Indeed, the publicity surrounding Hwang's design makes it unpatentable as well (at least in the United States).

(Do a web search for "floorplan light switch" and see the publicity his design received.)

I'm delighted that someone else seems to have independently thought of the idea (half the idea). Even so, I am disappointed not only that no commercial maker of electrical panels and light switches has thought of following up on this idea, but also that no other people have "independently" invented it.

Lessons from the World's Most Tech-Savvy Government - Sten Tamkivi - The Atlantic


URL:http://www.theatlantic.com/international/archive/2014/01/lessons-from-the-worlds-most-tech-savvy-government/283341/


An Estonian shares his country's strategy for navigating the digital world.

People wave Estonian national flags during a concert in Tallinn, in August 2011. (Ints Kalnins/Reuters)

Lately, I have been getting a lot of questions about Healthcare.gov. People want to know why it cost between two and four times as much money to create a broken website as it did to build the original iPhone. It’s an excellent question. However, in my experience, understanding why a project went wrong tends to be far less valuable than understanding why a project went right. So, rather than explaining why paying anywhere between $300 million and $600 million to build the first iteration of Healthcare.gov was a bad idea, I would like to focus attention on a model for software-enabled government that works and could serve as a template for a more effective U.S. government.

Early in my career as a venture capitalist, we invested in Skype and I went on the board. One of the many interesting aspects of Skype was that it was based in Estonia, a small country with a difficult history. Over the centuries, Estonia has been invaded by many countries including Denmark, Sweden, Germany, and, most recently, the Soviet Union. Now independent but well aware of their past, the Estonian people are humble, pragmatic, and proud of their freedom, but dubious of overly optimistic forecasts. In some ways, they have the ideal culture for technology adoption: hopeful, yet appropriately skeptical.

Supported by this culture, the Estonian government has built the technology platform that everyone wishes we had here. To explain how they did it, I asked an Estonian and one of our Entrepreneurs in Residence, Sten Tamkivi, to tell the story. His response is below.

— Ben Horowitz, co-founder and partner of the venture capital firm Andreessen Horowitz

***

Estonia may not show up on Americans’ radar too often. It is a tiny country in northeastern Europe, just next to Finland. It has the territory of the Netherlands but 13 times fewer people; its 1.3 million inhabitants are comparable to Hawaii’s population. As a friend from India recently quipped, “What is there to govern?”

What makes this tiny country interesting in terms of governance is not just that the people can elect their parliament online or get tax overpayments back within two days of filing their returns. It is also that this level of service for citizens is not the result of the government building a few websites. Instead, Estonians started by redesigning their entire information infrastructure from the ground up with openness, privacy, security, and ‘future-proofing’ in mind.

The first building block of e-government is telling citizens apart. This sounds blatantly obvious, but alternating between referring to a person by his social security number, taxpayer number, and other identifiers doesn’t cut it. Estonia uses a simple, unique ID methodology across all systems, from paper passports to bank records to government offices and hospitals. A citizen with the personal ID code 37501011234 is a male born in the 20th century (3) in year ’75 on January 1 as the 123rd baby of that day. The number ends with a computational checksum to easily detect typos.
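
The field layout described above can be sketched in code. The parsing follows the article's worked example exactly; the two-round mod-11 weighting in the checksum is an assumption based on Estonia's published standard for personal codes (the article only says the code "ends with a computational checksum"), and the article's example code is presumably invented for illustration, so its final digit need not validate:

```javascript
// Decode the fields of an Estonian personal ID code, using the article's
// example 37501011234: first digit = century + gender, then YYMMDD,
// then the birth sequence number for that day, then a check digit.
function parseIdCode(code) {
  const g = Number(code[0]); // 1/2 => 1800s, 3/4 => 1900s, 5/6 => 2000s; odd = male
  return {
    gender: g % 2 === 1 ? "male" : "female",
    birthYear: 1800 + Math.floor((g - 1) / 2) * 100 + Number(code.slice(1, 3)),
    birthMonth: Number(code.slice(3, 5)),
    birthDay: Number(code.slice(5, 7)),
    serial: Number(code.slice(7, 10)), // e.g. the 123rd baby born that day
    checkDigit: Number(code[10]),
  };
}

// Two-round mod-11 checksum over the first 10 digits (an assumption from
// the published national standard, not stated in the article itself).
function checksum(first10) {
  const weigh = ws =>
    [...first10].reduce((sum, ch, i) => sum + Number(ch) * ws[i], 0) % 11;
  const r1 = weigh([1, 2, 3, 4, 5, 6, 7, 8, 9, 1]);
  if (r1 < 10) return r1;
  const r2 = weigh([3, 4, 5, 6, 7, 8, 9, 1, 2, 3]);
  return r2 < 10 ? r2 : 0;
}

const person = parseIdCode("37501011234");
// person.gender is "male", person.birthYear is 1975, person.serial is 123
```

Because the same code appears on passports, bank records, and hospital files, any system can decode these fields without a lookup table.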

For these identified citizens to transact with each other, Estonia passed the Digital Signatures Act in 2000. The state standardized a national Public Key Infrastructure (PKI), which binds citizen identities to their cryptographic keys, and now doesn’t care if any Tiit and Toivo (to use common Estonian names) sign a contract in electronic form with certificates or plain ink on paper. A signature is a signature in the eyes of the law.

Estonian Prime Minister Andrus Ansip signs an e-services agreement. (Government of Estonia)

As a quirky side effect, this foundational law also forced all decentralized government systems to become digital “by market demand.” No part of the Estonian government can turn down a citizen’s digitally signed document and demand a paper copy instead. As citizens opt for convenience, bureaucrats see a higher inflow of digital forms and are self-motivated to invest in systems that will help them manage the process. Yet a social worker in a small village can still provide the same service with no big investment by handling the small number of digitally signed email attachments the office receives.

To prevent this system from becoming obsolete in the future, the law did not lock in the technical nuances of digital signatures. In fact, implementation has been changing over time. Initially, Estonia put a microchip in the traditional ID cards issued to every citizen for identification and domestic travel inside the European Union. The chip carries two certificates: one for legal signatures and the other for authentication when using a website or service that recognizes the government's identification system (online banking, for example). Every person over 15 is required to have an ID card, and there are now over 1.2 million active cards. That’s close to 100-percent penetration of the population.

As mobile adoption in Estonia rapidly approached the current 144 percent (the third-highest in Europe), digital signatures adapted too. Instead of carrying a smartcard reader with their computer, Estonians can now get a Mobile ID-enabled SIM card from their telecommunications operator. Without installing any additional hardware or software, they can access secure systems and affix their signatures by simply typing PIN codes on their mobile phone.

As of this writing, between ID cards and mobile phones, more than a million Estonians have authenticated 230 million times and given 140 million legally binding signatures. Besides the now-daily usage of this technology for commercial contracts and bank transactions, the most high-profile use case has been elections. Since becoming the first country in the world to allow online voting nationwide in 2005, Estonia has used the system for both parliamentary and European Parliament elections. During parliamentary elections in 2011, online voting accounted for 24 percent of all votes. (Citizens voted from 105 countries in total; I submitted my vote from California.)

To accelerate innovation, the state tendered building and securing the digital signature-certificate systems to private parties, namely a consortium led by local banks and telecoms. And that's not where the public-private partnerships end: Public and private players can access the same data-exchange system (dubbed X-Road), enabling truly integrated e-services.

Without question, it is always the Estonian citizen who owns his or her data and retains the right to control access to that data.

A prime example is the income-tax declarations Estonians “fill” out. Quote marks are appropriate here, because when an average Estonian opens the submission form once a year, it usually looks more like a review wizard: “next -> next -> next -> submit.” This is because data has been moving throughout the year. When employers report employment taxes every month, their data entries are linked to people’s tax records too. Charitable donations reported by non-profits are recorded as deductions for the giver in the same fashion. Tax deductions on mortgages are registered from data interchange with commercial banks. And so forth. Not only is the income-tax rate in the country a flat 21 percent, but Estonians get tax overpayments paid back into their bank accounts (digitally transferred, of course) within two days of submitting their forms.

This liquid movement of data between systems relies on a fundamental principle to protect people’s privacy: Without question, it is always the citizen who owns his or her data and retains the right to control access to that data. For example, in the case of fully digital health records and prescriptions, people can granularly assign access rights to the general practitioners and specialized doctors of their choosing. And in scenarios where they can’t legally block the state from seeing their information, as with Estonian e-policemen using real-time terminals, they at least get a record of who accessed their data and when. If an honest citizen learns that an official has been snooping on them without a valid reason, the person can file an inquiry and get the official fired.

Moving everything online does generate security risks, not just on a personal level but also on a systemic and national level. Estonia, for instance, was the target of the Cyberwar of 2007, when well-coordinated botnet attacks following some political street riots targeted government, media, and financial sites and effectively cut the country off from Internet connections with the rest of the world for several hours. Since then, however, Estonia has become the home of the NATO Cooperative Cyber Defence Centre of Excellence, and Estonian President Toomas Hendrik Ilves has become one of the most vocal cybersecurity advocates on the world stage.

There is also a flip side to the fully digitized nature of the Republic of Estonia: having the bureaucratic machine of a country humming in the cloud increases the economic cost of a potential physical assault on the state. Rather than ceasing to operate in the event of an invasion, the government could boot up a backup replica of the digital state and host it in some other friendly European territory. Government officials would be quickly re-elected, important decisions made, documents issued, business and property records maintained, births and deaths registered, and even taxes filed by those citizens who still had access to the Internet.

Estonia is a start-up country—not just by life stage, but by mindset.

The Estonian story is certainly special. The country achieved re-independence after 50 unfortunate years of Soviet occupation in 1991, having missed much of the technological progress made by the Western world in the 1960s, ’70s, and ’80s, including checkbooks and mainframe computers. Nevertheless, the country jumped right on the mid-’90s bandwagon of TCP/IP-enabled web apps. During this social reset, Estonians also decided to throw their former communist leaders overboard and elect new leadership, often ministers in their late 20s capable of disruptive thinking.

But then again, all this was 20 years ago. Estonia has by many macroeconomic and political standards become a “boring European state,” stable and predictable, if still racing to close the gap with Old Europe from its time behind the Iron Curtain. Still, Estonia is a start-up country—not just by life stage, but by mindset.

And this is what the United States, along with many other countries struggling to get the Internet right, could learn from Estonia: the mindset. The willingness to get the key infrastructure right and continuously reinvent it. Before you build a health-insurance site, you need to look at what key components must exist for such a service to function optimally: signatures, transactions, legal frameworks, and the like.

Ultimately, the states that create these kinds of environments will be best positioned to attract the world’s increasingly mobile citizens. 

Government: Guards may be responsible for half of prison sex assaults | Al Jazeera America


URL:http://america.aljazeera.com/articles/2014/1/26/guards-may-be-responsibleforhalfofprisonrapes.html


Inmates sit in the county jail in Williston, N.D. on July 26, 2013. Andrew Burton/Getty Images

Allegations of rape and sexual assault involving prison inmates are increasing, and nearly half of those assaults are committed against prisoners by correctional officers, according to a new report issued by the Justice Department’s Bureau of Justice Statistics.

Prison and jail administrators reported 8,763 cases of alleged sexual abuse of inmates in 2011, representing an increase of 4 percent from the 8,404 reported in 2010 and an 11 percent jump from the 7,855 cases reported in 2009, the report said.

The report released late last week defined sexual victimization as any non-consensual sexual acts, abusive touching, threats and verbal sexual harassment. It involved surveying federal and state prisons, private prisons, local jails, military prisons and jails located in Indian country, all of which hold a collective 1.97 million inmates.

The issue of prison rape has received heightened attention since Congress passed the Prison Rape Elimination Act in 2003, a federal law calling for prisons and jails to keep detailed records of incidents of rape to be published by the government annually.

This year’s report, which crunched data from 2011, said that 10 percent of the cases reported that year were “substantiated,” meaning that they were confirmed to have happened after an investigation was launched.

That means 90 percent of the cases reported by inmates were not substantiated. The report did not clarify whether those cases had also been investigated and then dismissed.

Some 49 percent of the incidents that year involved prison staff members committing what the report called “sexual misconduct,” or otherwise sexually harassing inmates, with the other 51 percent of cases comprising inmates assaulting fellow inmates.

Among the substantiated staff-on-inmate cases in 2011, 54 percent were committed by women, the report said. From 2009 to 2011, 84 percent of the substantiated staff-on-inmate cases involved a sexual relationship with a female staff member that “appeared to be willing,” compared to 37 percent of the cases involving male staff members during the same time period. The report noted, however, that regardless of whether the sexual relationship between an inmate and a correctional officer was consensual, it was illegal.

In the cases of sexual assault or “willing” sexual relationships with staff members, more than three-quarters of the correctional officers resigned or were fired, but just 45 percent were arrested or prosecuted.

Women prisoners appeared to experience disproportionate numbers of sexual assaults; while they represented 7 percent of state and federal prison inmates from 2009 to 2011, they were involved in 22 percent of inmate-on-inmate cases and 33 percent of staff-on-inmate incidents.

Two-thirds of the inmates who had been sexually assaulted by other inmates received medical examinations, and one-third were given rape kits.

The report did not indicate whether the increased incidence of alleged rapes and sexual assaults in prisons and jails might have been due to more reporting by inmates, or to heightened awareness of the problem by prison staff.

BJS statistician Allan Beck, who was a co-author of the report, told Reuters that a study from May 2013 (PDF) conducted by the same agency came up with much larger numbers, tallying some 80,000 inmate allegations of sexual abuse or assault during 2011 and 2012.

“Of course we find much higher rates of sexual victimization through inmates' self-reports than what comes through in the official records," he told Reuters.

Why “Simple” Websites Are Scientifically Superior | SoshiTech - Social Media Technology - Soshitech.com


URL:http://soshitech.com/2014/01/27/why-simple-websites-are-scientifically-superior/


Originally found on ConversionXL.com

In a study by Google in August of 2012, researchers found not only that users judge websites as beautiful or not within 1/50th to 1/20th of a second, but also that “visually complex” websites are consistently rated as less beautiful than their simpler counterparts.

Moreover, “highly prototypical” sites (those with layouts commonly associated with their category) with simple visual design were rated as the most beautiful across the board.

In other words, the study found the simpler the design, the better.

But why?

In this article, we’ll examine why things like cognitive fluency and visual information processing theory can play a critical role in simplifying your web design & how a simpler design could lead to more conversions.

We’ll also look at a few case studies of sites that simplified their design, and how it improved their conversion rate, as well as give a few pointers to simplify your own design.

What is a Prototypical Website?

If I said “furniture,” what image pops up in your mind? If you’re like 95% of people, you think of a chair. If I asked what color represents “boy,” you’d think “blue”; girl = pink, car = sedan, bird = robin, etc.

Prototypicality is the basic mental image your brain creates to categorize everything you interact with. From furniture to websites, your brain has created a template for how things should look and feel.

Online, prototypicality breaks down into smaller categories. You have a different, but specific mental image for social networks, e-commerce sites, and blogs — and if any of those particular websites are missing something from your mental image, you reject the site on conscious and subconscious levels.

If I said “Online clothing store for trendy 20-somethings” you might envision something like this:


This follows the “online clothing store” prototype so closely that it shares many attributes with the wireframe for an online clothing store that sells hip-hop clothing.


Neither lacks originality, and it’s unlikely they “stole” from each other. Instead they’re playing into what your basic expectations are of what an e-commerce site should be.

What Do You Mean By Cognitive Fluency?

The basic idea behind cognitive fluency is that the brain prefers to think about things that are easy to think about.

That’s why you prefer visiting sites where you instinctively know where everything is at, and you know what actions you’re supposed to take.

“Fluency guides our thinking in situations where we have no idea that it is at work, and it affects us in any situation where we weigh information.” — Uxmatters.com

Cognitive fluency stems from another behavioral phenomenon known as the Mere Exposure Effect, which basically states that the more times you’re exposed to a stimulus, the more you prefer it.


Again, the rules are the same online.

It’s “familiar” for blogs to have opt-ins in the right sidebar, or for e-commerce sites to feature a large hi-resolution image with an attention-grabbing headline and the company logo in the top left-hand corner of the screen.

If your visitors are conditioned to certain characteristics being the standard for a particular category of site, deviating from that could subconsciously put you in the “less beautiful” category.

Here are a handful of e-commerce sites. See if you notice any similarities.

Warning: Whatever you do, for the love of GOD, don’t take what I’m saying as “do what everyone else is doing.” If you’re not careful, you could really hurt yourself that way.

It’s important to know what design choices are prototypical for a site in your category, but it’s more important to find evidence that supports those design choices resulting in some sort of lift.

A lot of designers make bad choices. Without doing the research, you could make them too. For example, many e-commerce sites use automatic image sliders to display products, but study after study shows that automatic sliders tank conversions.

What Happens When You Meet Basic Expectations? — A Case Study

In the three images above, everything you’d expect from an ecommerce site is exactly where it’s supposed to be. Even if you’ve never been to the site, there’s inherent “credibility” to the design.

With a high level of fluency, a site will feel familiar enough that visitors don’t need to spend mental effort scrutinizing it and can instead focus on why they’re on your site in the first place.

When the experience is disfluent, however, you feel it immediately. Take online tie retailer Skinnyties.com, which didn’t really look like an e-commerce site until its redesign in October 2012.

Before:

After:

A few key changes that led to huge results:

  • Follows prototypical e-commerce layout themes
  • Much more “open” with whitespace.
  • Images feature a single product with high-resolution pictures & contrasting colors.

Check out the full case study on this particular redesign, as it shows what is truly possible when updating a site to “fit in” with current prototypical standards.

The results of the redesign, just 2.5 weeks after launch, are staggering:

The redesign itself, while pretty, isn’t doing anything groundbreaking. It plays exactly into the expectations of what a modern online clothing retailer should be. It’s “open”, responsive, and has a consistent design language across all of the product pages.

But when contrasted with the old site, it’s very clear that the lack of these common elements was preventing buyers from making purchases on the site.

What Visual Information Processing Has To Do With Site Complexity

In this joint study by Harvard, the University of Maryland, and the University of Colorado, researchers found strong mathematical correlations in what different demographics rated as “aesthetically pleasing” (for example, participants with PhDs did not like highly colorful websites), but no guidelines for universal appeal emerged.

The only thing that was universal was that the more visually complex a website was, the lower its visual appeal.

(Sidebar: if you wish to take the test, you can do it here)

Why Simple is Scientifically Easier To Process

The reason less “visually complex” websites are considered more beautiful is partly because low complexity websites don’t require the eyes and brain to physically work as hard to decode, store and process the information.

Basically, your retina converts visual information from the real world into electrical impulses. Those impulses are then routed through the appropriate photoreceptor cells to transmit the color and light information to the brain.

The more color and light variations on the page (visual complexity) the more work the eye has to do to send information to the brain.

“…the eye receives visual information and codes information into electric neural activity which is fed back to the brain where it is “stored” and “coded”. This information can be used by other parts of the brain relating to mental activities such as memory, perception and attention.” — Simplypsychology.org

Every Element Communicates Subtle Information


This is why it’s important when designing a website to remember that every element (typography, logo, and color selection) communicates subtle information about the brand.

When these elements don’t do their job, the webmaster often compensates by adding unnecessary copy and/or images, thus adding to the visual complexity of the website, and detracting from the overall aesthetic.

Optimizing a page for visual information processing — specifically simplifying information’s journey from eye to brain — is about communicating as much as you can in as few elements as possible.

While that’s an article all on its own, consider MailChimp’s logo redesign as food for thought.

When they decided to make the brand grow up, they didn’t add the usual “we’ve been doing email since 2001, 3 million people trust us, here’s why we’re awesome” copy.

Instead, they tightened up the writing, simplified the website — the top headline simply reads “Send Better Email” — and added an even simpler explainer animation of the core product.

Even though this was part of a bigger growth strategy, the results are still impressive: over a million new users have been added since June, when the new logo debuted.

“Working Memory” & The Holy Grail of Conversion

What all this simplicity is leading to is what happens once visual information finds its way to the brain.

According to the famous research of Princeton psychologist George A. Miller, the average adult brain is able to store between 5 and 9 “chunks” of information in short-term working memory.

Working memory is the part of your brain that temporarily stores and processes information in the course of a few seconds. It’s what allows you to focus attention, resist distractions, and most importantly, guides your decision making.


Everything we’ve been talking about up to this point serves to reduce the amount of “noise” that makes its way into the working memory.

On a "low complexity, highly prototypical website", the 5-9 "chunks" working memory tries to process are things like guarantees, product descriptions, prices, or offers. When working memory can stay focused on the problem, it will try to solve it as quickly as possible.

Deviation Causes Disengagement

When you deviate from a person’s expectations — the price was higher than expected, the color scheme and symmetry were off, the site didn’t load fast enough, the photos weren’t high enough resolution — the working memory processes those disfluent “chunks” instead of what matters.

That's because working memory calls on long-term memory to use what it already knows to perform the task. When long-term memory can't aid in processing the information, flow is broken, and working memory disengages and moves on.

That's why it's vital to understand your visitor's level of exposure, not just to sites in your category but to websites in general, if you want to "hack" their working memory with design.

The blogs they read, the sites they shop on, their browser, age, gender, and physical location all hint at how familiar your site will feel on first impression.

Conclusion

If visitors can't rely on their previous experience, they're not thinking about how innovative your site is. They're just left wondering why things aren't where they're "supposed to be." Not the best frame of mind if you want them to buy stuff.

Bonus: 7 Things To Do When Planning A Simpler Site.

1. Research your audience and the sites they visit the most. Look for case studies on design changes from those sites and how they resulted in improvement in key areas.

2. Create a mashup of all those “working” components for your own site.

3. Obey the rules of cognitive fluency when you lay out your design. Put things where your visitors have grown accustomed to finding them.

4. Rely on your own colors, logo, and typeface to communicate clearly and subtly. Don’t add copy and/or images unless it communicates something your visitor actually cares about.

5. Keep it as simple as possible — one large image vs a bunch of little ones, one column, instead of three — utilize as much white space as possible.

6. Double check to make sure your site fits the public expectation in pricing, aesthetics, speed, etc.

7. Remember that “prototypical” doesn’t mean that every aspect of your site should fit that mold.

Don’t think of your site as some unique snowflake piece of art. Instead make it a composite of all the best stuff.

Your visitors will love you for it.

For more great research based, action oriented content check out ConversionXL.com


Why are US corporate profits so high? Because wages are so low | MacroScope

Comments:"Why are US corporate profits so high? Because wages are so low | MacroScope"

URL:http://blogs.reuters.com/macroscope/2014/01/24/why-are-us-corporate-profits-so-high-because-wages-are-so-low/


U.S. businesses have never had it so good.

Corporate cash piles have never been bigger, either in dollar terms or as a share of the economy.

The labor market, meanwhile, is still millions of jobs short of where it was before the global financial crisis first erupted over six years ago.

Coincidence?

Not in the slightest, according to Jan Hatzius, chief U.S. economist at Goldman Sachs:

“The strength (in profits) is directly related to the weakness in hourly wages, which are still growing at just a 2% nominal pace. The weakness of wages and the resulting strength of profits are telling signs that the US labor market is still far from full employment.”

Companies have not been able to raise prices much because the economic recovery has been fragile. But they've still managed to boost profits beyond anything ever seen before because they've got away with employing as few workers as possible at as low a rate as possible.

Compare and contrast these two charts:

[Charts: corporate profits at a record high as a share of the economy; hourly wage growth near a half-century low]

So, corporate profits are at their highest ever and wage growth is near its lowest in half a century. But don't expect the transfer of that cash from businesses to workers to start any time soon, says Hatzius:

“The bottom line is that the favorable environment for corporate profits should persist for some time yet, and the case for an acceleration in the near term is strong. Hourly labor costs would need to grow more than 4% to eat into margins on a systematic basis. Such a strong acceleration still seems to be at least a couple of years off.”  

Hypertext Transfer Protocol version 2

Comments:"Hypertext Transfer Protocol version 2"

URL:http://http2.github.io/http2-spec/#GTFO


This specification describes an optimized expression of the syntax of the Hypertext Transfer Protocol (HTTP). HTTP/2 enables a more efficient use of network resources and a reduced perception of latency by introducing header field compression and allowing multiple concurrent messages on the same connection. It also introduces unsolicited push of representations from servers to clients.

This document is an alternative to, but does not obsolete, the HTTP/1.1 message syntax. HTTP's existing semantics remain unchanged.

The Hypertext Transfer Protocol (HTTP) is a wildly successful protocol. However, the HTTP/1.1 message format ([HTTP-p1], Section 3) is optimized for implementation simplicity and accessibility, not application performance. As such it has several characteristics that have a negative overall effect on application performance.

In particular, HTTP/1.0 only allows one request to be outstanding at a time on a given connection. HTTP/1.1 pipelining only partially addressed request concurrency and suffers from head-of-line blocking. Therefore, clients that need to make many requests typically use multiple connections to a server in order to reduce latency.

Furthermore, HTTP/1.1 header fields are often repetitive and verbose, which, in addition to generating more or larger network packets, can cause the small initial TCP congestion window to quickly fill. This can result in excessive latency when multiple requests are made on a single new TCP connection.

This document addresses these issues by defining an optimized mapping of HTTP's semantics to an underlying connection. Specifically, it allows interleaving of request and response messages on the same connection and uses an efficient coding for HTTP header fields. It also allows prioritization of requests, letting more important requests complete more quickly, further improving performance.

The resulting protocol is designed to be more friendly to the network, because fewer TCP connections can be used, in comparison to HTTP/1.x. This means less competition with other flows, and longer-lived connections, which in turn leads to better utilization of available network capacity.

Finally, this encapsulation also enables more scalable processing of messages through use of binary message framing.

1.1 Document Organization

The HTTP/2 specification is split into four parts:

  • Starting HTTP/2 (Section 3) covers how an HTTP/2 connection is initiated.
  • The framing (Section 4) and streams (Section 5) layers describe how HTTP/2 frames are structured and formed into multiplexed streams.
  • Frame (Section 6) and error (Section 7) definitions include details of the frame and error types used in HTTP/2.
  • HTTP mappings (Section 8) and additional requirements (Section 9) describe how HTTP semantics are expressed using the mechanisms defined.

While some of the frame and stream layer concepts are isolated from HTTP, the intent is not to define a completely generic framing layer. The framing and streams layers are tailored to the needs of the HTTP protocol and server push.

1.2 Conventions and Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].

All numeric values are in network byte order. Values are unsigned unless otherwise indicated. Literal values are provided in decimal or hexadecimal as appropriate. Hexadecimal literals are prefixed with 0x to distinguish them from decimal literals.

The following terms are used:

client: The endpoint initiating the HTTP connection.
connection: A transport-level connection between two endpoints.
connection error: An error that affects the entire HTTP/2 connection.
endpoint: Either the client or server of the connection.
frame: The smallest unit of communication within an HTTP/2 connection, consisting of a header and a variable-length sequence of bytes structured according to the frame type.
peer: An endpoint. When discussing a particular endpoint, "peer" refers to the endpoint that is remote to the primary subject of discussion.
receiver: An endpoint that is receiving frames.
sender: An endpoint that is transmitting frames.
server: The endpoint which did not initiate the HTTP connection.
stream: A bi-directional flow of frames across a virtual channel within the HTTP/2 connection.
stream error: An error on the individual HTTP/2 stream.

HTTP/2 uses the same "http" and "https" URI schemes used by HTTP/1.1. HTTP/2 shares the same default port numbers: 80 for "http" URIs and 443 for "https" URIs. As a result, implementations processing requests for target resource URIs like http://example.org/foo or https://example.com/bar are required to first discover whether the upstream server (the immediate peer to which the client wishes to establish a connection) supports HTTP/2.

The means by which support for HTTP/2 is determined is different for "http" and "https" URIs. Discovery for "http" URIs is described in Section 3.2. Discovery for "https" URIs is described in Section 3.3.

The protocol defined in this document is identified using the string "h2". This identification is used in the HTTP/1.1 Upgrade header field, in the TLS application layer protocol negotiation extension [TLSALPN] field, and other places where protocol identification is required.

Negotiating "h2" implies the use of the transport, security, framing and message semantics described in this document.

Only implementations of the final, published RFC can identify themselves as "h2". Until such an RFC exists, implementations MUST NOT identify themselves using "h2".

Examples and text throughout the rest of this document use "h2" as a matter of editorial convenience only. Implementations of draft versions MUST NOT identify using this string.

Implementations of draft versions of the protocol MUST add the string "-" and the corresponding draft number to the identifier. For example, draft-ietf-httpbis-http2-09 is identified using the string "h2-09".

Non-compatible experiments that are based on these draft versions MUST append the string "-" and an experiment name to the identifier. For example, an experimental implementation of packet mood-based encoding based on draft-ietf-httpbis-http2-09 might identify itself as "h2-09-emo". Note that any label MUST conform to the "token" syntax defined in Section 3.2.6 of [HTTP-p1]. Experimenters are encouraged to coordinate their experiments on the ietf-http-wg@w3.org mailing list.
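The identifier rules above can be sketched as a small helper. This is illustrative only; the function name is my own, and the token check follows the "token" syntax from Section 3.2.6 of [HTTP-p1].

```python
import re

# "token" syntax from [HTTP-p1] Section 3.2.6: one or more tchars.
TOKEN = re.compile(r"^[!#$%&'*+\-.^_`|~0-9A-Za-z]+$")

def draft_identifier(draft_number, experiment=None):
    """Build a draft protocol identifier, e.g. 9 -> "h2-09",
    (9, "emo") -> "h2-09-emo" (helper name is my own)."""
    ident = "h2-%02d" % draft_number
    if experiment is not None:
        ident += "-" + experiment
    if not TOKEN.match(ident):
        raise ValueError("not a valid token: %r" % ident)
    return ident
```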

A client that makes a request to an "http" URI without prior knowledge about support for HTTP/2 uses the HTTP Upgrade mechanism (Section 6.7 of [HTTP-p1]). The client makes an HTTP/1.1 request that includes an Upgrade header field identifying HTTP/2 with the h2 token. The HTTP/1.1 request MUST include exactly one HTTP2-Settings (Section 3.2.1) header field.

For example:

GET /default.htm HTTP/1.1
Host: server.example.com
Connection: Upgrade, HTTP2-Settings
Upgrade: h2
HTTP2-Settings: <base64url encoding of HTTP/2 SETTINGS payload>
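A request like the one above could be assembled with a sketch such as the following. The helper name is my own, and the base64url handling (stripping trailing "=" padding) is an assumption; Section 3.2.1 of the draft defines the exact encoding.

```python
import base64

def build_upgrade_request(host, path, settings_payload):
    """Build the plaintext HTTP/1.1 request offering an upgrade to h2.
    settings_payload is the raw payload of an HTTP/2 SETTINGS frame,
    sent base64url-encoded (trailing '=' stripped here; check the
    draft's Section 3.2.1 for the exact encoding rules)."""
    token = base64.urlsafe_b64encode(settings_payload).rstrip(b"=").decode("ascii")
    return ("GET %s HTTP/1.1\r\n"
            "Host: %s\r\n"
            "Connection: Upgrade, HTTP2-Settings\r\n"
            "Upgrade: h2\r\n"
            "HTTP2-Settings: %s\r\n"
            "\r\n" % (path, host, token)).encode("ascii")
```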

Requests that contain an entity body MUST be sent in their entirety before the client can send HTTP/2 frames. This means that a large request entity can block the use of the connection until it is completely sent.

If concurrency of an initial request with subsequent requests is important, a small request can be used to perform the upgrade to HTTP/2, at the cost of an additional round-trip.

A server that does not support HTTP/2 can respond to the request as though the Upgrade header field were absent:

HTTP/1.1 200 OK
Content-Length: 243
Content-Type: text/html
...

A server that supports HTTP/2 can accept the upgrade with a 101 (Switching Protocols) response. After the empty line that terminates the 101 response, the server can begin sending HTTP/2 frames. These frames MUST include a response to the request that initiated the Upgrade.

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: h2
[ HTTP/2 connection ...

The first HTTP/2 frame sent by the server is a SETTINGS frame (Section 6.5). Upon receiving the 101 response, the client sends a connection header (Section 3.5), which includes a SETTINGS frame.

The HTTP/1.1 request that is sent prior to upgrade is assigned stream identifier 1 and is assigned the highest possible priority. Stream 1 is implicitly half closed from the client toward the server, since the request is completed as an HTTP/1.1 request. After commencing the HTTP/2 connection, stream 1 is used for the response.

A client that makes a request to an "https" URI without prior knowledge about support for HTTP/2 uses TLS [TLS12] with the application layer protocol negotiation extension [TLSALPN].

Once TLS negotiation is complete, both the client and the server send a connection header (Section 3.5).

A client can learn that a particular server supports HTTP/2 by other means. For example, [AltSvc] describes a mechanism for advertising this capability in an HTTP header field. A client MAY immediately send HTTP/2 frames to a server that is known to support HTTP/2, after the connection header (Section 3.5). A server can identify such a connection by the use of the "PRI" method in the connection header. This only affects the resolution of "http" URIs; servers supporting HTTP/2 are required to support protocol negotiation in TLS [TLSALPN] for "https" URIs.

Prior support for HTTP/2 is not a strong signal that a given server will support HTTP/2 for future connections. It is possible for server configurations to change or for configurations to differ between instances in a server cluster. Interception proxies (a.k.a. "transparent" proxies) are another source of variability.

This specification defines a number of frame types, each identified by a unique 8-bit type code. Each frame type serves a distinct purpose either in the establishment and management of the connection as a whole, or of individual streams.

The transmission of specific frame types can alter the state of a connection. If endpoints fail to maintain a synchronized view of the connection state, successful communication within the connection will no longer be possible. Therefore, it is important that endpoints have a shared comprehension of how the state is affected by the use of any given frame. Accordingly, while it is expected that new frame types will be introduced by extensions to this protocol, only frames defined by this document are permitted to alter the connection state.

DATA frames (type=0x0) convey arbitrary, variable-length sequences of octets associated with a stream. One or more DATA frames are used, for instance, to carry HTTP request or response payloads.

DATA frames MAY also contain arbitrary padding. Padding can be added to DATA frames to hide the size of messages.

 0 1 2 3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 | [Pad High(8)] | [Pad Low (8)] | Data (*) .
 +-+-------------------------------------------------------------+
 . Data (*) ...
 +---------------------------------------------------------------+
 | Padding (*) ...
 +---------------------------------------------------------------+

Figure 3: DATA Frame Payload

The DATA frame contains the following fields:

Pad High: An 8-bit field containing an amount of padding in units of 256 octets. This field is optional and is only present if the PAD_HIGH flag is set. This field, in combination with Pad Low, determines how much padding there is on a frame.
Pad Low: An 8-bit field containing an amount of padding in units of single octets. This field is optional and is only present if the PAD_LOW flag is set. This field, in combination with Pad High, determines how much padding there is on a frame.
Data: Application data. The amount of data is the remainder of the frame payload after subtracting the length of the other fields that are present.
Padding: Padding octets that contain no application semantic value. Padding octets MUST be set to zero when sending and ignored when receiving.

The DATA frame defines the following flags:

END_STREAM (0x1): Bit 1 being set indicates that this frame is the last that the endpoint will send for the identified stream. Setting this flag causes the stream to enter one of the "half closed" states or the "closed" state (Section 5.1).
RESERVED (0x2): Bit 2 is reserved for future use.
PAD_HIGH (0x4): Bit 3 being set indicates that the Pad High field is present. This bit MUST NOT be set unless the PAD_LOW flag is also set.
PAD_LOW (0x8): Bit 4 being set indicates that the Pad Low field is present.

DATA frames MUST be associated with a stream. If a DATA frame is received whose stream identifier field is 0x0, the recipient MUST respond with a connection error (Section 5.4.1) of type PROTOCOL_ERROR.

DATA frames are subject to flow control and can only be sent when a stream is in the "open" or "half closed (remote)" states. Padding is not excluded from flow control. If a DATA frame is received whose stream is not in "open" or "half closed (local)" state, the recipient MUST respond with a stream error (Section 5.4.2) of type STREAM_CLOSED.

The total number of padding octets is determined by multiplying the value of the Pad High field by 256 and adding the value of the Pad Low field. Both Pad High and Pad Low fields assume a value of zero if absent. If the length of the padding is greater than the length of the remainder of the frame payload, the recipient MUST treat this as a connection error (Section 5.4.1) of type PROTOCOL_ERROR.

Note: A frame can be increased in size by one octet by including a Pad Low field with a value of zero.
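Assuming the field layout and flag definitions above, a receiver's padding handling might look like this sketch (the function name and error handling are my own):

```python
PAD_LOW, PAD_HIGH = 0x8, 0x4  # DATA frame flag bits

def split_data_payload(payload, flags):
    """Separate a DATA frame payload into (data, padding).
    Total padding = Pad High * 256 + Pad Low; each field is present
    only when its flag is set, and absent fields count as zero."""
    if flags & PAD_HIGH and not flags & PAD_LOW:
        raise ValueError("PAD_HIGH without PAD_LOW")   # PROTOCOL_ERROR
    pad = 0
    if flags & PAD_HIGH:              # Pad High comes first when present
        pad += payload[0] * 256
        payload = payload[1:]
    if flags & PAD_LOW:
        pad += payload[0]
        payload = payload[1:]
    if pad > len(payload):
        raise ValueError("padding exceeds payload")    # PROTOCOL_ERROR
    return payload[:len(payload) - pad], payload[len(payload) - pad:]
```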

Use of padding is a security feature; as such, its use demands some care, see Section 10.6.

The PRIORITY frame (type=0x2) specifies the sender-advised priority of a stream. It can be sent at any time for an existing stream. This enables reprioritisation of existing streams.

 0 1 2 3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |X| Priority (31) |
 +-+-------------------------------------------------------------+

Figure 5: PRIORITY Frame Payload

The payload of a PRIORITY frame contains a single reserved bit and a 31-bit priority.

The PRIORITY frame does not define any flags.

The PRIORITY frame is associated with an existing stream. If a PRIORITY frame is received with a stream identifier of 0x0, the recipient MUST respond with a connection error (Section 5.4.1) of type PROTOCOL_ERROR.

The PRIORITY frame can be sent on a stream in any of the "reserved (remote)", "open", "half-closed (local)", or "half closed (remote)" states, though it cannot be sent between consecutive frames that comprise a single header block (Section 4.3). Note that this frame could arrive after processing or frame sending has completed, which would cause it to have no effect. For a stream that is in the "half closed (remote)" state, this frame can only affect processing of the stream and not frame transmission.

The RST_STREAM frame (type=0x3) allows for abnormal termination of a stream. When sent by the initiator of a stream, it indicates that they wish to cancel the stream or that an error condition has occurred. When sent by the receiver of a stream, it indicates that either the receiver is rejecting the stream, requesting that the stream be cancelled or that an error condition has occurred.

 0 1 2 3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 | Error Code (32) |
 +---------------------------------------------------------------+

Figure 6: RST_STREAM Frame Payload

The RST_STREAM frame contains a single unsigned, 32-bit integer identifying the error code (Section 7). The error code indicates why the stream is being terminated.

The RST_STREAM frame does not define any flags.

The RST_STREAM frame fully terminates the referenced stream and causes it to enter the closed state. After receiving a RST_STREAM on a stream, the receiver MUST NOT send additional frames for that stream. However, after sending the RST_STREAM, the sending endpoint MUST be prepared to receive and process additional frames sent on the stream that might have been sent by the peer prior to the arrival of the RST_STREAM.

RST_STREAM frames MUST be associated with a stream. If a RST_STREAM frame is received with a stream identifier of 0x0, the recipient MUST treat this as a connection error (Section 5.4.1) of type PROTOCOL_ERROR.

RST_STREAM frames MUST NOT be sent for a stream in the "idle" state. If a RST_STREAM frame identifying an idle stream is received, the recipient MUST treat this as a connection error (Section 5.4.1) of type PROTOCOL_ERROR.

The SETTINGS frame (type=0x4) conveys configuration parameters that affect how endpoints communicate. The parameters are either constraints on peer behavior or preferences.

Settings are not negotiated. Settings describe characteristics of the sending peer, which are used by the receiving peer. Different values for the same setting can be advertised by each peer. For example, a client might set a high initial flow control window, whereas a server might set a lower value to conserve resources.

SETTINGS frames MUST be sent at the start of a connection, and MAY be sent at any other time by either endpoint over the lifetime of the connection.

Implementations MUST support all of the settings defined by this specification and MAY support additional settings defined by extensions. Unsupported or unrecognized settings MUST be ignored. New settings MUST NOT be defined or implemented in a way that requires endpoints to understand them in order to communicate successfully.

Each setting in a SETTINGS frame replaces the existing value for that setting. Settings are processed in the order in which they appear, and a receiver of a SETTINGS frame does not need to maintain any state other than the current value of settings. Therefore, the value of a setting is the last value that is seen by a receiver. This permits the inclusion of the same settings multiple times in the same SETTINGS frame, though doing so does nothing other than waste connection capacity.
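The last-value-wins rule can be illustrated with a small decoder, using the 8-bit identifier plus 32-bit value entry layout shown in Figure 7 (the function name is my own):

```python
import struct

def parse_settings_payload(payload):
    """Decode a SETTINGS payload (8-bit identifier + 32-bit value per
    entry, network byte order) into a dict. Entries are applied in
    order, so a repeated identifier keeps the last value seen."""
    if len(payload) % 5:
        raise ValueError("malformed SETTINGS payload")
    settings = {}
    for offset in range(0, len(payload), 5):
        ident, value = struct.unpack_from(">BI", payload, offset)
        settings[ident] = value  # later entries overwrite earlier ones
    return settings
```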

The SETTINGS frame defines the following flag:

ACK (0x1): Bit 1 being set indicates that this frame acknowledges receipt and application of the peer's SETTINGS frame. When this bit is set, the payload of the SETTINGS frame MUST be empty. Receipt of a SETTINGS frame with the ACK flag set and a length field value other than 0 MUST be treated as a connection error (Section 5.4.1) of type FRAME_SIZE_ERROR. For more info, see Settings Synchronization (Section 6.5.3).

SETTINGS frames always apply to a connection, never a single stream. The stream identifier for a settings frame MUST be zero. If an endpoint receives a SETTINGS frame whose stream identifier field is anything other than 0x0, the endpoint MUST respond with a connection error (Section 5.4.1) of type PROTOCOL_ERROR.

The SETTINGS frame affects connection state. A badly formed or incomplete SETTINGS frame MUST be treated as a connection error (Section 5.4.1) of type PROTOCOL_ERROR.

The payload of a SETTINGS frame consists of zero or more settings. Each setting consists of an unsigned 8-bit setting identifier, and an unsigned 32-bit value.

 0 1 2 3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |Identifier (8) | Value (32) ...
 +---------------+-----------------------------------------------+
 ...Value |
 +---------------+

Figure 7: Setting Format

The following settings are defined:

SETTINGS_HEADER_TABLE_SIZE (1): Allows the sender to inform the remote endpoint of the size of the header compression table used to decode header blocks. The space available for encoding cannot be changed; it is determined by the setting sent by the peer that receives the header blocks. The initial value is 4,096 bytes.
SETTINGS_ENABLE_PUSH (2): This setting can be used to disable server push (Section 8.2). An endpoint MUST NOT send a PUSH_PROMISE frame if it receives this setting set to a value of 0. An endpoint that has set this setting to 0 and had it acknowledged MUST treat the receipt of a PUSH_PROMISE frame as a connection error (Section 5.4.1) of type PROTOCOL_ERROR. The initial value is 1, which indicates that push is permitted.
SETTINGS_MAX_CONCURRENT_STREAMS (3): Indicates the maximum number of concurrent streams that the sender will allow. This limit is directional: it applies to the number of streams that the sender permits the receiver to create. Initially there is no limit to this value. It is recommended that this value be no smaller than 100, so as to not unnecessarily limit parallelism. A value of 0 for SETTINGS_MAX_CONCURRENT_STREAMS SHOULD NOT be treated as special by endpoints. A zero value does prevent the creation of new streams; however, this can also happen for any limit that is exhausted with active streams. Servers SHOULD only set a zero value for short durations; if a server does not wish to accept requests, closing the connection could be preferable.
SETTINGS_INITIAL_WINDOW_SIZE (4): Indicates the sender's initial window size (in bytes) for stream level flow control. This setting affects the window size of all streams, including existing streams; see Section 6.9.2. Values above the maximum flow control window size of 2^31 - 1 MUST be treated as a connection error (Section 5.4.1) of type FLOW_CONTROL_ERROR.

An endpoint that receives a SETTINGS frame with any other setting identifier MUST treat this as a connection error (Section 5.4.1) of type PROTOCOL_ERROR.

Most values in SETTINGS benefit from or require an understanding of when the peer has received and applied the changed setting values. In order to provide such synchronization timepoints, the recipient of a SETTINGS frame in which the ACK flag is not set MUST apply the updated settings as soon as possible upon receipt.

The values in the SETTINGS frame MUST be applied in the order they appear, with no other frame processing between values. Once all values have been applied, the recipient MUST immediately emit a SETTINGS frame with the ACK flag set. The sender of altered settings applies changes upon receiving a SETTINGS frame with the ACK flag set.

If the sender of a SETTINGS frame does not receive an acknowledgement within a reasonable amount of time, it MAY issue a connection error (Section 5.4.1) of type SETTINGS_TIMEOUT.

The PUSH_PROMISE frame (type=0x5) is used to notify the peer endpoint in advance of streams the sender intends to initiate. The PUSH_PROMISE frame includes the unsigned 31-bit identifier of the stream the endpoint plans to create along with a set of headers that provide additional context for the stream. Section 8.2 contains a thorough description of the use of PUSH_PROMISE frames.

PUSH_PROMISE MUST NOT be sent if the SETTINGS_ENABLE_PUSH setting of the peer endpoint is set to 0.

 0 1 2 3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |X| Promised-Stream-ID (31) |
 +-+-------------------------------------------------------------+
 | Header Block Fragment (*) ...
 +---------------------------------------------------------------+

Figure 8: PUSH_PROMISE Payload Format

The payload of a PUSH_PROMISE includes a "Promised-Stream-ID". This unsigned 31-bit integer identifies the stream the endpoint intends to start sending frames for. The promised stream identifier MUST be a valid choice for the next stream sent by the sender (see new stream identifier (Section 5.1.1)).

Following the "Promised-Stream-ID" is a header block fragment (Section 4.3).

PUSH_PROMISE frames MUST be associated with an existing, peer-initiated stream. If the stream identifier field specifies the value 0x0, a recipient MUST respond with a connection error (Section 5.4.1) of type PROTOCOL_ERROR.

The PUSH_PROMISE frame defines the following flags:

END_PUSH_PROMISE (0x4): Bit 3 being set indicates that this frame contains an entire header block (Section 4.3) and is not followed by any CONTINUATION frames. A PUSH_PROMISE frame without the END_PUSH_PROMISE flag set MUST be followed by a CONTINUATION frame for the same stream. A receiver MUST treat the receipt of any other type of frame or a frame on a different stream as a connection error (Section 5.4.1) of type PROTOCOL_ERROR.

Promised streams are not required to be used in the order promised. The PUSH_PROMISE only reserves stream identifiers for later use.

Recipients of PUSH_PROMISE frames can choose to reject promised streams by returning a RST_STREAM referencing the promised stream identifier back to the sender of the PUSH_PROMISE.

The PUSH_PROMISE frame modifies the connection state as defined in Section 4.3.

A PUSH_PROMISE frame modifies the connection state in two ways. The inclusion of a header block (Section 4.3) potentially modifies the compression state. PUSH_PROMISE also reserves a stream for later use, causing the promised stream to enter the "reserved" state. A sender MUST NOT send a PUSH_PROMISE on a stream unless that stream is either "open" or "half closed (remote)"; the sender MUST ensure that the promised stream is a valid choice for a new stream identifier (Section 5.1.1) (that is, the promised stream MUST be in the "idle" state).

Since PUSH_PROMISE reserves a stream, ignoring a PUSH_PROMISE frame causes the stream state to become indeterminate. A receiver MUST treat the receipt of a PUSH_PROMISE on a stream that is neither "open" nor "half-closed (local)" as a connection error (Section 5.4.1) of type PROTOCOL_ERROR. Similarly, a receiver MUST treat the receipt of a PUSH_PROMISE that promises an illegal stream identifier (Section 5.1.1) (that is, an identifier for a stream that is not currently in the "idle" state) as a connection error (Section 5.4.1) of type PROTOCOL_ERROR.

The PING frame (type=0x6) is a mechanism for measuring a minimal round-trip time from the sender, as well as determining whether an idle connection is still functional. PING frames can be sent from any endpoint.

 0 1 2 3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 | |
 | Opaque Data (64) |
 | |
 +---------------------------------------------------------------+

Figure 9: PING Payload Format

In addition to the frame header, PING frames MUST contain 8 octets of data in the payload. A sender can include any value it chooses and use those bytes in any fashion.

Receivers of a PING frame that does not include an ACK flag MUST send a PING frame with the ACK flag set in response, with an identical payload. PING responses SHOULD be given higher priority than any other frame.

The PING frame defines the following flags:

ACK (0x1): Bit 1 being set indicates that this PING frame is a PING response. An endpoint MUST set this flag in PING responses. An endpoint MUST NOT respond to PING frames containing this flag.

PING frames are not associated with any individual stream. If a PING frame is received with a stream identifier field value other than 0x0, the recipient MUST respond with a connection error (Section 5.4.1) of type PROTOCOL_ERROR.

Receipt of a PING frame with a length field value other than 8 MUST be treated as a connection error (Section 5.4.1) of type FRAME_SIZE_ERROR.
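The PING rules above can be condensed into a small handler. This is an illustrative sketch, not a normative implementation: the 8-byte header packing (length, type, flags, stream identifier) is an assumption made for demonstration, and error signalling is reduced to Python exceptions in place of actual PROTOCOL_ERROR and FRAME_SIZE_ERROR frames.

```python
import struct

PING_TYPE = 0x6
FLAG_ACK = 0x1

def build_ping(opaque: bytes, ack: bool = False) -> bytes:
    # PING frames MUST carry exactly 8 octets of opaque data.
    if len(opaque) != 8:
        raise ValueError("PING payload must be exactly 8 octets")
    flags = FLAG_ACK if ack else 0
    # Assumed header layout: 16-bit length, 8-bit type, 8-bit flags,
    # 32-bit stream identifier (always 0x0 for PING).
    return struct.pack("!HBBI", len(opaque), PING_TYPE, flags, 0) + opaque

def handle_ping(flags: int, stream_id: int, payload: bytes):
    """Return a PING response frame, or None when the frame is itself an ACK."""
    if stream_id != 0:
        raise ValueError("PROTOCOL_ERROR: PING with a non-zero stream identifier")
    if len(payload) != 8:
        raise ValueError("FRAME_SIZE_ERROR: PING payload must be 8 octets")
    if flags & FLAG_ACK:
        return None  # an endpoint MUST NOT respond to a PING response
    return build_ping(payload, ack=True)  # identical payload, ACK flag set
```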

Senders indicate a General Termination of Future Operations by sending a GTFO frame (type=0x7), which informs the remote peer to stop creating streams on this connection. GTFO can be sent by either client or server. Once sent, the sender will ignore frames sent on new streams for the remainder of the connection. Receivers of a GTFO frame MUST NOT open additional streams on the connection, although a new connection can be established for new streams. The purpose of this frame is to allow an endpoint to gracefully stop accepting new streams (perhaps for a reboot or maintenance), while still finishing processing of previously established streams.

There is an inherent race condition between an endpoint starting new streams and the remote sending a GTFO frame. To deal with this case, the GTFO frame contains the stream identifier of the last stream that was processed on the sending endpoint in this connection. If the receiver of the GTFO used streams that are newer than the indicated stream identifier, they were not processed by the sender, and the receiver may treat the streams as though they had never been created at all (hence the receiver may want to re-create the streams later on a new connection).

Endpoints SHOULD always send a GTFO frame before closing a connection so that the remote can know whether a stream has been partially processed or not. For example, if an HTTP client sends a POST at the same time that a server closes a connection, the client cannot know if the server started to process that POST request if the server does not send a GTFO frame to indicate where it stopped working. An endpoint might choose to close a connection without sending GTFO for misbehaving peers.

After sending a GTFO frame, the sender can discard frames for new streams. However, any frames that alter connection state cannot be completely ignored. For instance, HEADERS, PUSH_PROMISE and CONTINUATION frames MUST be minimally processed to ensure a consistent compression state (see Section 4.3); similarly DATA frames MUST be counted toward the connection flow control window.

 0 1 2 3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |X| Last-Stream-ID (31) |
 +-+-------------------------------------------------------------+
 | Error Code (32) |
 +---------------------------------------------------------------+
 | Additional Debug Data (*) |
 +---------------------------------------------------------------+

Figure 10: GTFO Payload Format

The GTFO frame does not define any flags.

The GTFO frame applies to the connection, not a specific stream. An endpoint MUST treat a GTFO frame with a stream identifier other than 0x0 as a connection error (Section 5.4.1) of type PROTOCOL_ERROR.

The last stream identifier in the GTFO frame contains the highest numbered stream identifier on which the sender of the GTFO frame has received frames and might have taken some action. All streams up to and including the identified stream might have been processed in some way. The last stream identifier is set to 0 if no streams were processed.

  • Note: In this case, "processed" means that some data from the stream was passed to some higher layer of software that might have taken some action as a result.

If a connection terminates without a GTFO frame, this value is effectively the highest stream identifier.

On streams with lower or equal numbered identifiers that were not closed completely prior to the connection being closed, re-attempting requests, transactions, or any protocol activity is not possible (with the exception of idempotent actions like HTTP GET, PUT, or DELETE). Any protocol activity that uses higher numbered streams can be safely retried using a new connection.

Activity on streams numbered lower or equal to the last stream identifier might still complete successfully. The sender of a GTFO frame might gracefully shut down a connection by sending a GTFO frame, maintaining the connection in an open state until all in-progress streams complete.


The GTFO frame also contains a 32-bit error code (Section 7) that contains the reason for closing the connection.

Endpoints MAY append opaque data to the payload of any GTFO frame. Additional debug data is intended for diagnostic purposes only and carries no semantic value. Debug information could contain security- or privacy-sensitive data. Logged or otherwise persistently stored debug data MUST have adequate safeguards to prevent unauthorized access.
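The GTFO payload layout in Figure 10 can be packed and unpacked in a few lines. A minimal sketch, assuming the reserved (X) bit is always sent as zero and leaving the error code as a bare integer:

```python
import struct

def build_gtfo_payload(last_stream_id: int, error_code: int, debug: bytes = b"") -> bytes:
    # The high bit of the first 32-bit word is the reserved (X) bit, sent
    # as 0, so the last stream identifier must fit in 31 bits.
    if not 0 <= last_stream_id <= 0x7FFFFFFF:
        raise ValueError("stream identifiers are 31-bit values")
    return struct.pack("!II", last_stream_id, error_code) + debug

def parse_gtfo_payload(payload: bytes):
    last_raw, error_code = struct.unpack("!II", payload[:8])
    last_stream_id = last_raw & 0x7FFFFFFF  # mask the reserved bit
    debug = payload[8:]                     # opaque, diagnostic use only
    return last_stream_id, error_code, debug
```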

The WINDOW_UPDATE frame (type=0x8) is used to implement flow control.

Flow control operates at two levels: on each individual stream and on the entire connection.

Both types of flow control are hop by hop; that is, only between the two endpoints. Intermediaries do not forward WINDOW_UPDATE frames between dependent connections. However, throttling of data transfer by any receiver can indirectly cause the propagation of flow control information toward the original sender.

Flow control only applies to frames that are identified as being subject to flow control. Of the frame types defined in this document, this includes only DATA frames. Frames that are exempt from flow control MUST be accepted and processed, unless the receiver is unable to assign resources to handling the frame. A receiver MAY respond with a stream error (Section 5.4.2) or connection error (Section 5.4.1) of type FLOW_CONTROL_ERROR if it is unable to accept a frame.

 0 1 2 3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |X| Window Size Increment (31) |
 +-+-------------------------------------------------------------+

Figure 11: WINDOW_UPDATE Payload Format

The payload of a WINDOW_UPDATE frame is one reserved bit, plus an unsigned 31-bit integer indicating the number of bytes that the sender can transmit in addition to the existing flow control window. The legal range for the increment to the flow control window is 1 to 2^31 - 1 (0x7fffffff) bytes.

The WINDOW_UPDATE frame does not define any flags.

The WINDOW_UPDATE frame can be specific to a stream or to the entire connection. In the former case, the frame's stream identifier indicates the affected stream; in the latter, the value "0" indicates that the entire connection is the subject of the frame.

WINDOW_UPDATE can be sent by a peer that has sent a frame bearing the END_STREAM flag. This means that a receiver could receive a WINDOW_UPDATE frame on a "half closed (remote)" or "closed" stream. A receiver MUST NOT treat this as an error; see Section 5.1.

A receiver that receives a flow controlled frame MUST always account for its contribution against the connection flow control window, unless the receiver treats this as a connection error (Section 5.4.1). This is necessary even if the frame is in error. Since the sender counts the frame toward the flow control window, if the receiver does not, the flow control window at sender and receiver can become different.
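Parsing the WINDOW_UPDATE payload is straightforward. The sketch below is illustrative only; it masks the reserved bit and rejects increments outside the legal range of 1 to 2^31 - 1:

```python
import struct

def parse_window_update(payload: bytes) -> int:
    """Return the window size increment carried by a WINDOW_UPDATE payload."""
    if len(payload) != 4:
        raise ValueError("FRAME_SIZE_ERROR: WINDOW_UPDATE payload must be 4 octets")
    (raw,) = struct.unpack("!I", payload)
    increment = raw & 0x7FFFFFFF  # drop the reserved (X) bit
    if increment < 1:
        # The legal range for the increment is 1 to 2^31 - 1.
        raise ValueError("window size increment outside the legal range")
    return increment
```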

6.9.1 The Flow Control Window

Flow control in HTTP/2 is implemented using a window kept by each sender on every stream. The flow control window is a simple integer value that indicates how many bytes of data the sender is permitted to transmit; as such, its size is a measure of the buffering capability of the receiver.

Two flow control windows are applicable: the stream flow control window and the connection flow control window. The sender MUST NOT send a flow controlled frame with a length that exceeds the space available in either of the flow control windows advertised by the receiver. Frames with zero length with the END_STREAM flag set (for example, an empty data frame) MAY be sent if there is no available space in either flow control window.

For flow control calculations, the 8 byte frame header is not counted.

After sending a flow controlled frame, the sender reduces the space available in both windows by the length of the transmitted frame.
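Sender-side accounting against the two windows can be sketched as follows (illustrative only; note that the 8-byte frame header is deliberately not counted, per the rule above):

```python
def send_flow_controlled(payload_len: int, stream_window: int, conn_window: int):
    """Check a flow controlled payload against both windows, then debit each.

    Returns the updated (stream_window, conn_window) pair."""
    if payload_len > min(stream_window, conn_window):
        raise ValueError("payload exceeds an advertised flow control window")
    return stream_window - payload_len, conn_window - payload_len
```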

The receiver of a frame sends a WINDOW_UPDATE frame as it consumes data and frees up space in flow control windows. Separate WINDOW_UPDATE frames are sent for the stream and connection level flow control windows.

A sender that receives a WINDOW_UPDATE frame updates the corresponding window by the amount specified in the frame.

A sender MUST NOT allow a flow control window to exceed 2^31 - 1 bytes. If a sender receives a WINDOW_UPDATE that causes a flow control window to exceed this maximum, it MUST terminate either the stream or the connection, as appropriate. For streams, the sender sends an RST_STREAM frame with the FLOW_CONTROL_ERROR code; for the connection, a GTFO frame with the FLOW_CONTROL_ERROR code.
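The overflow rule can be enforced at the point where a WINDOW_UPDATE is applied. A sketch, with the required error frames reduced to a Python exception:

```python
MAX_WINDOW = 2**31 - 1  # 0x7fffffff

def apply_window_update(window: int, increment: int) -> int:
    """Add a WINDOW_UPDATE increment to a sender-side window, enforcing
    the 2^31 - 1 ceiling."""
    updated = window + increment
    if updated > MAX_WINDOW:
        # For a stream: send RST_STREAM with FLOW_CONTROL_ERROR.
        # For the connection: send GTFO with FLOW_CONTROL_ERROR.
        raise OverflowError("flow control window would exceed 2^31 - 1")
    return updated
```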

Flow controlled frames from the sender and WINDOW_UPDATE frames from the receiver are completely asynchronous with respect to each other. This property allows a receiver to aggressively update the window size kept by the sender to prevent streams from stalling.

When an HTTP/2 connection is first established, new streams are created with an initial flow control window size of 65,535 bytes. The connection flow control window is 65,535 bytes. Both endpoints can adjust the initial window size for new streams by including a value for SETTINGS_INITIAL_WINDOW_SIZE in the SETTINGS frame that forms part of the connection header.

Prior to receiving a SETTINGS frame that sets a value for SETTINGS_INITIAL_WINDOW_SIZE, an endpoint can only use the default initial window size when sending flow controlled frames. Similarly, the connection flow control window is set to the default initial window size until a WINDOW_UPDATE frame is received.

A SETTINGS frame can alter the initial flow control window size for all current streams. When the value of SETTINGS_INITIAL_WINDOW_SIZE changes, a receiver MUST adjust the size of all stream flow control windows that it maintains by the difference between the new value and the old value. A SETTINGS frame cannot alter the connection flow control window.

A change to SETTINGS_INITIAL_WINDOW_SIZE could cause the available space in a flow control window to become negative. A sender MUST track the negative flow control window, and MUST NOT send new flow controlled frames until it receives WINDOW_UPDATE frames that cause the flow control window to become positive.

For example, if the client sends 60KB immediately on connection establishment, and the server sets the initial window size to be 16KB, the client will recalculate the available flow control window to be -44KB on receipt of the SETTINGS frame. The client retains a negative flow control window until WINDOW_UPDATE frames restore the window to being positive, after which the client can resume sending.
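The arithmetic of that example, sketched in code (illustrative only; `rescale_window` is a hypothetical helper, not part of the protocol):

```python
DEFAULT_INITIAL_WINDOW = 65_535

def rescale_window(current: int, old_initial: int, new_initial: int) -> int:
    """Adjust a stream window by the difference between the new and old
    SETTINGS_INITIAL_WINDOW_SIZE values; the result may go negative."""
    return current + (new_initial - old_initial)

# The client has sent 60KB against the default window when a SETTINGS
# frame lowers the initial window size to 16KB.
sent = 60 * 1024
current = DEFAULT_INITIAL_WINDOW - sent  # 4,095 bytes remaining
adjusted = rescale_window(current, DEFAULT_INITIAL_WINDOW, 16 * 1024)
# adjusted is now -45,056 bytes (-44KB); the client must wait for
# WINDOW_UPDATE frames before sending more flow controlled frames.
```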

6.9.3 Reducing the Stream Window Size

A receiver that wishes to use a smaller flow control window than the current size can send a new SETTINGS frame. However, the receiver MUST be prepared to receive data that exceeds this window size, since the sender might send data that exceeds the lower limit prior to processing the SETTINGS frame.

A receiver has two options for handling streams that exceed flow control limits:

  • The receiver can immediately send RST_STREAM with the FLOW_CONTROL_ERROR error code for the affected streams.
  • The receiver can accept the streams and tolerate the resulting head of line blocking, sending WINDOW_UPDATE frames as it consumes data.

If a receiver decides to accept streams, both sides MUST recompute the available flow control window based on the initial window size sent in the SETTINGS.

The CONTINUATION frame (type=0x9) is used to continue a sequence of header block fragments (Section 4.3). Any number of CONTINUATION frames can be sent on an existing stream, as long as the preceding frame on the same stream is one of HEADERS, PUSH_PROMISE or CONTINUATION without the END_HEADERS or END_PUSH_PROMISE flag set.

 0 1 2 3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 | Header Block Fragment (*) ...
 +---------------------------------------------------------------+

Figure 12: CONTINUATION Frame Payload

The CONTINUATION frame defines the following flags:

END_HEADERS (0x4): Bit 3 being set indicates that this frame ends a header block (Section 4.3). If the END_HEADERS bit is not set, this frame MUST be followed by another CONTINUATION frame. A receiver MUST treat the receipt of any other type of frame or a frame on a different stream as a connection error (Section 5.4.1) of type PROTOCOL_ERROR.

The payload of a CONTINUATION frame contains a header block fragment (Section 4.3).

The CONTINUATION frame changes the connection state as defined in Section 4.3.

CONTINUATION frames MUST be associated with a stream. If a CONTINUATION frame is received whose stream identifier field is 0x0, the recipient MUST respond with a connection error (Section 5.4.1) of type PROTOCOL_ERROR.

A CONTINUATION frame MUST be preceded by a HEADERS, PUSH_PROMISE or CONTINUATION frame without the END_HEADERS flag set. A recipient that observes violation of this rule MUST respond with a connection error (Section 5.4.1) of type PROTOCOL_ERROR.

HTTP/2 is intended to be as compatible as possible with current web-based applications. This means that, from the perspective of the server business logic or application API, the features of HTTP are unchanged. To achieve this, all of the application request and response header semantics are preserved, although the syntax of conveying those semantics has changed. Thus, the rules from HTTP/1.1 ([HTTP-p1], [HTTP-p2], [HTTP-p4], [HTTP-p5], [HTTP-p6], and [HTTP-p7]) apply with the changes in the sections below.

A client sends an HTTP request on a new stream, using a previously unused stream identifier (Section 5.1.1). A server sends an HTTP response on the same stream as the request.

An HTTP request or response consists of:

  • a HEADERS frame;
  • one contiguous sequence of zero or more CONTINUATION frames;
  • zero or more DATA frames; and
  • optionally, a contiguous sequence that starts with a HEADERS frame, followed by zero or more CONTINUATION frames.

The last frame in the sequence bears an END_STREAM flag, though a HEADERS frame bearing the END_STREAM flag can be followed by CONTINUATION frames that carry any remaining portions of the header block.

Other frames MAY be interspersed with these frames, but those frames do not carry HTTP semantics. In particular, HEADERS frames (and any CONTINUATION frames that follow) other than the first and optional last frames in this sequence do not carry HTTP semantics.

Trailing header fields are carried in a header block that also terminates the stream. That is, a sequence starting with a HEADERS frame, followed by zero or more CONTINUATION frames, where the HEADERS frame bears an END_STREAM flag. Header blocks after the first that do not terminate the stream are not part of an HTTP request or response.

An HTTP request/response exchange fully consumes a single stream. A request starts with the HEADERS frame that puts the stream into an "open" state and ends with a frame bearing END_STREAM, which causes the stream to become "half closed" for the client. A response starts with a HEADERS frame and ends with a frame bearing END_STREAM, optionally followed by CONTINUATION frames, which places the stream in the "closed" state.

The 1xx series of HTTP response status codes ([HTTP-p2], Section 6.2) are not supported in HTTP/2.

The most common use case for 1xx is using an Expect header field with a 100-continue token (colloquially, "Expect/continue") to indicate that the client expects a 100 (Continue) non-final response status code, receipt of which indicates that the client should continue sending the request body if it has not already done so.

Typically, Expect/continue is used by clients wishing to avoid sending a large amount of data in a request body, only to have the request rejected by the origin server.

HTTP/2 does not enable the Expect/continue mechanism; if the server sends a final status code to reject the request, it can do so without making the underlying connection unusable.

Note that this means HTTP/2 clients sending requests with bodies may waste at least one round trip of sent data when the request is rejected. This can be mitigated by restricting the amount of data sent for the first round trip by bandwidth-constrained clients, in anticipation of a final status code.

Other defined 1xx status codes are not applicable to HTTP/2; the semantics of 101 (Switching Protocols) are better expressed using a distinct frame type, since they apply to the entire connection, not just one stream. Likewise, 102 (Processing) is no longer necessary, because HTTP/2 has a separate means of keeping the connection alive.

This difference between protocol versions necessitates special handling by intermediaries that translate between them:

  • An intermediary that gateways HTTP/1.1 to HTTP/2 MUST generate a 100 (Continue) response if a received request includes an Expect header field with a 100-continue token ([HTTP-p2], Section 5.1.1), unless it can immediately generate a final status code. It MUST NOT forward the 100-continue expectation in the request header fields.
  • An intermediary that gateways HTTP/2 to HTTP/1.1 MAY add an Expect header field with a 100-continue expectation when forwarding a request that has a body; see [HTTP-p2], Section 5.1.1 for specific requirements.
  • An intermediary that gateways HTTP/2 to HTTP/1.1 MUST discard all other 1xx informational responses.

8.1.2 Examples

This section shows HTTP/1.1 requests and responses, with illustrations of equivalent HTTP/2 requests and responses.

An HTTP GET request includes request header fields and no body and is therefore transmitted as a single contiguous sequence of HEADERS and CONTINUATION frames containing the serialized block of request header fields. The last HEADERS frame in the sequence has both the END_HEADERS and END_STREAM flags set:

 GET /resource HTTP/1.1 HEADERS
 Host: example.org ==> + END_STREAM
 Accept: image/jpeg + END_HEADERS
 :method = GET
 :scheme = https
 :authority = example.org
 :path = /resource
 accept = image/jpeg

Similarly, a response that includes only response header fields is transmitted as a sequence of HEADERS frames containing the serialized block of response header fields. The last HEADERS frame in the sequence has both the END_HEADERS and END_STREAM flags set:

 HTTP/1.1 304 Not Modified HEADERS
 ETag: "xyzzy" ===> + END_STREAM
 Expires: Thu, 23 Jan ... + END_HEADERS
 :status = 304
 etag: "xyzzy"
 expires: Thu, 23 Jan ...

An HTTP POST request that includes request header fields and payload data is transmitted as one HEADERS frame, followed by zero or more CONTINUATION frames containing the request header fields, followed by one or more DATA frames, with the last CONTINUATION (or HEADERS) frame having the END_HEADERS flag set and the final DATA frame having the END_STREAM flag set:

 POST /resource HTTP/1.1 HEADERS
 Host: example.org ==> - END_STREAM
 Content-Type: image/jpeg + END_HEADERS
 Content-Length: 123 :method = POST
 :scheme = https
 {binary data} :authority = example.org
 :path = /resource
 content-type = image/jpeg
 content-length = 123
 DATA
 + END_STREAM
 {binary data}

A response that includes header fields and payload data is transmitted as a HEADERS frame, followed by zero or more CONTINUATION frames, followed by one or more DATA frames, with the last DATA frame in the sequence having the END_STREAM flag set:

 HTTP/1.1 200 OK HEADERS
 Content-Type: image/jpeg ==> - END_STREAM
 Content-Length: 123 + END_HEADERS
 :status = 200
 {binary data} content-type = image/jpeg
 content-length = 123
 DATA
 + END_STREAM
 {binary data}

Trailing header fields are sent as a header block after both the request or response header block and all the DATA frames have been sent. The sequence of HEADERS/CONTINUATION frames that bears the trailers includes a terminal frame that has both END_HEADERS and END_STREAM flags set.

 HTTP/1.1 200 OK HEADERS
 Content-Type: image/jpeg ===> - END_STREAM
 Transfer-Encoding: chunked + END_HEADERS
 TE: trailers :status = 200
 content-length = 123
 123 content-type = image/jpeg
 {binary data}
 0 DATA
 Foo: bar - END_STREAM
 {binary data}
 HEADERS
 + END_STREAM
 + END_HEADERS
 foo: bar

In HTTP/1.1, an HTTP client is unable to retry a non-idempotent request when an error occurs, because there is no means to determine the nature of the error. It is possible that some server processing occurred prior to the error, which could result in undesirable effects if the request were reattempted.

HTTP/2 provides two mechanisms for providing a guarantee to a client that a request has not been processed:

  • The GTFO frame indicates the highest stream number that might have been processed. Requests on streams with higher numbers are therefore guaranteed to be safe to retry.
  • The REFUSED_STREAM error code can be included in a RST_STREAM frame to indicate that the stream is being closed prior to any processing having occurred. Any request that was sent on the reset stream can be safely retried.

Clients MUST NOT treat requests that have not been processed as having failed. Clients MAY automatically retry these requests, including those with non-idempotent methods.

A server MUST NOT indicate that a stream has not been processed unless it can guarantee that fact. If frames on a stream are passed to the application layer for any stream, then REFUSED_STREAM MUST NOT be used for that stream, and a GTFO frame MUST include a stream identifier that is greater than or equal to that stream's identifier.
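The two retry guarantees can be combined into a single predicate. A minimal sketch (illustrative; the function name is invented for this example):

```python
def guaranteed_unprocessed(stream_id: int, gtfo_last_stream_id: int,
                           refused_stream: bool = False) -> bool:
    """True when the request is guaranteed not to have been processed,
    so a client can safely retry it, even for a non-idempotent method."""
    # REFUSED_STREAM promises that no processing occurred on that stream;
    # a GTFO frame promises that nothing above its last stream identifier
    # was processed.
    return refused_stream or stream_id > gtfo_last_stream_id
```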

In addition to these mechanisms, the PING frame provides a way for a client to easily test a connection. Connections that remain idle can become broken as some middleboxes (for instance, network address translators, or load balancers) silently discard connection bindings. The PING frame allows a client to safely test whether a connection is still active without sending a request.

HTTP/2 enables a server to pre-emptively send (or "push") multiple associated resources to a client in response to a single request. This feature becomes particularly helpful when the server knows the client will need to have those resources available in order to fully process the originally requested resource.

Pushing additional resources is optional, and is negotiated only between individual endpoints. The SETTINGS_ENABLE_PUSH setting can be set to 0 to indicate that server push is disabled. Even if enabled, an intermediary could receive pushed resources from the server but could choose not to forward those on to the client. How to make use of the pushed resources is up to that intermediary. Equally, the intermediary might choose to push additional resources to the client, without any action taken by the server.

A client cannot push resources; clients and servers MUST operate as though SETTINGS_ENABLE_PUSH were set to 0 for the client. As a consequence, servers MUST treat the receipt of a PUSH_PROMISE frame as a connection error (Section 5.4.1) of type PROTOCOL_ERROR. Clients MUST reject any attempt to change this setting by treating the message as a connection error (Section 5.4.1) of type PROTOCOL_ERROR.

A server can only push requests that are safe (see [HTTP-p2], Section 4.2.1), cacheable (see [HTTP-p6], Section 3) and do not include a request body.

8.2.1 Push Requests

Server push is semantically equivalent to a server responding to a request. The PUSH_PROMISE frame, or frames, sent by the server includes a header block that contains a complete set of request header fields that the server attributes to the request. It is not possible to push a response to a request that includes a request body.

Pushed resources are always associated with an explicit request from a client. The PUSH_PROMISE frames sent by the server are sent on the stream created for the original request. The PUSH_PROMISE frame includes a promised stream identifier, chosen from the stream identifiers available to the server (see Section 5.1.1).

The header fields in PUSH_PROMISE and any subsequent CONTINUATION frames MUST be a valid and complete set of request header fields (Section 8.1.3.1). The server MUST include a method in the :method header field that is safe and cacheable. If a client receives a PUSH_PROMISE that does not include a complete and valid set of header fields, or the :method header field identifies a method that is not safe, it MUST respond with a stream error (Section 5.4.2) of type PROTOCOL_ERROR.

The server SHOULD send PUSH_PROMISE (Section 6.6) frames prior to sending any frames that reference the promised resources. This avoids a race where clients issue requests for resources prior to receiving any PUSH_PROMISE frames.

For example, if the server receives a request for a document containing embedded links to multiple image files, and the server chooses to push those additional images to the client, sending push promises before the DATA frames that contain the image links ensures that the client is able to see the promises before discovering the resources. Similarly, if the server pushes resources referenced by the header block (for instance, in Link header fields), sending the push promises before sending the header block ensures that clients do not request those resources.

PUSH_PROMISE frames MUST NOT be sent by the client. PUSH_PROMISE frames can be sent by the server on any stream that was opened by the client; they MUST be sent on a stream that is in either the "open" or "half closed (remote)" state from the server's perspective. PUSH_PROMISE frames are interspersed with the frames that comprise a response, though they cannot be interspersed with HEADERS and CONTINUATION frames that comprise a single header block.

8.2.2 Push Responses

After sending the PUSH_PROMISE frame, the server can begin delivering the pushed resource as a response (Section 8.1.3.2) on a server-initiated stream that uses the promised stream identifier. The server uses this stream to transmit an HTTP response, using the same sequence of frames as defined in Section 8.1. This stream becomes "half closed" to the client (Section 5.1) after the initial HEADERS frame is sent.

Once a client receives a PUSH_PROMISE frame and chooses to accept the pushed resource, the client SHOULD NOT issue any requests for the promised resource until after the promised stream has closed.

If the client determines, for any reason, that it does not wish to receive the pushed resource from the server, or if the server takes too long to begin sending the promised resource, the client can send an RST_STREAM frame, using either the CANCEL or REFUSED_STREAM codes, and referencing the pushed stream's identifier.

A client can use the SETTINGS_MAX_CONCURRENT_STREAMS setting to limit the number of resources that can be concurrently pushed by a server. Advertising a SETTINGS_MAX_CONCURRENT_STREAMS value of zero disables server push by preventing the server from creating the necessary streams. This does not prohibit a server from sending PUSH_PROMISE frames; clients need to reset any promised streams that are not wanted.

Clients receiving a pushed response MUST validate that the server is authorized to push the resource using the same-origin policy ([RFC6454], Section 3). For example, an HTTP/2 connection to example.com is generally not permitted to push a response for www.example.org.

The HTTP pseudo-method CONNECT ([HTTP-p2], Section 4.3.6) is used to convert an HTTP/1.1 connection into a tunnel to a remote host. CONNECT is primarily used with HTTP proxies to establish a TLS session with a server for the purposes of interacting with https resources.

In HTTP/2, the CONNECT method is used to establish a tunnel over a single HTTP/2 stream to a remote host. The HTTP header field mapping works mostly as defined in Request Header Fields (Section 8.1.3.1), with a few differences. Specifically:

  • The :method header field is set to CONNECT.
  • The :scheme and :path header fields MUST be omitted.
  • The :authority header field contains the host and port to connect to (equivalent to the authority-form of the request-target of CONNECT requests, see [HTTP-p1], Section 5.3).

A proxy that supports CONNECT establishes a TCP connection [TCP] to the server identified in the :authority header field. Once this connection is successfully established, the proxy sends a HEADERS frame containing a 2xx series status code, as defined in [HTTP-p2], Section 4.3.6.

After the initial HEADERS frame sent by each peer, all subsequent DATA frames correspond to data sent on the TCP connection. The payload of any DATA frames sent by the client are transmitted by the proxy to the TCP server; data received from the TCP server is assembled into DATA frames by the proxy. Frame types other than DATA or stream management frames (RST_STREAM, WINDOW_UPDATE, and PRIORITY) MUST NOT be sent on a connected stream, and MUST be treated as a stream error (Section 5.4.2) if received.

The TCP connection can be closed by either peer. The END_STREAM flag on a DATA frame is treated as being equivalent to the TCP FIN bit. A client is expected to send a DATA frame with the END_STREAM flag set after receiving a frame bearing the END_STREAM flag. A proxy that receives a DATA frame with the END_STREAM flag set sends the attached data with the FIN bit set on the last TCP segment. A proxy that receives a TCP segment with the FIN bit set sends a DATA frame with the END_STREAM flag set. Note that the final TCP segment or DATA frame could be empty.

A TCP connection error is signaled with RST_STREAM. A proxy treats any error in the TCP connection, which includes receiving a TCP segment with the RST bit set, as a stream error (Section 5.4.2) of type CONNECT_ERROR. Correspondingly, a proxy MUST send a TCP segment with the RST bit set if it detects an error with the stream or the HTTP/2 connection.

10.1 Server Authority and Same-Origin

This specification uses the same-origin policy ([RFC6454], Section 3) to determine whether an origin server is permitted to provide content.

A server that is contacted using TLS is authenticated based on the certificate that it offers in the TLS handshake (see [RFC2818], Section 3). A server is considered authoritative for an "https" resource if it has been successfully authenticated for the domain part of the origin of the resource that it is providing.

A server is considered authoritative for an "http" resource if the connection is established to a resolved IP address for the domain in the origin of the resource.

A client MUST NOT use, in any way, resources provided by a server that is not authoritative for those resources.

10.2 Cross-Protocol Attacks

When using TLS, we believe that HTTP/2 introduces no new cross-protocol attacks. TLS encrypts the contents of all transmission (except the handshake itself), making it difficult for attackers to control the data which could be used in a cross-protocol attack.

10.3 Intermediary Encapsulation Attacks

HTTP/2 header field names and values are encoded as sequences of octets with a length prefix. This enables HTTP/2 to carry any string of octets as the name or value of a header field. An intermediary that translates HTTP/2 requests or responses into HTTP/1.1 directly could permit the creation of corrupted HTTP/1.1 messages. An attacker might exploit this behavior to cause the intermediary to create HTTP/1.1 messages with illegal header fields, extra header fields, or even new messages that are entirely falsified.

An intermediary that performs translation into HTTP/1.1 cannot alter the semantics of requests or responses. In particular, header field names or values that contain characters not permitted by HTTP/1.1, including carriage return (U+000D) or line feed (U+000A) MUST NOT be translated verbatim, as stipulated in [HTTP-p1], Section 3.2.4.

Translation from HTTP/1.x to HTTP/2 does not produce the same opportunity to an attacker. Intermediaries that perform translation to HTTP/2 MUST remove any instances of the obs-fold production from header field values.

10.4 Cacheability of Pushed Resources

Pushed resources are responses without an explicit request; the request for a pushed resource is synthesized from the request that triggered the push, plus resource identification information provided by the server. Request header fields are necessary for HTTP cache control validations (such as the Vary header field) to work. For this reason, caches MUST associate the request header fields from the PUSH_PROMISE frame with the response headers and content delivered on the pushed stream. This includes the Cookie header field.

Caching resources that are pushed is possible, based on the guidance provided by the origin server in the Cache-Control header field. However, this can cause issues if a single server hosts more than one tenant. For example, a server might offer multiple users each a small portion of its URI space.

Where multiple tenants share space on the same server, that server MUST ensure that tenants are not able to push representations of resources that they do not have authority over. Failure to enforce this would allow a tenant to provide a representation that would be served out of cache, overriding the actual representation that the authoritative tenant provides.

Pushed resources for which an origin server is not authoritative are never cached or used.

10.5 Denial of Service Considerations

An HTTP/2 connection can demand a greater commitment of resources to operate than an HTTP/1.1 connection. The use of header compression and flow control depends on a commitment of resources for storing a greater amount of state. Settings for these features ensure that memory commitments for these features are strictly bounded. Processing capacity cannot be guarded in the same fashion.

The SETTINGS frame can be abused to cause a peer to expend additional processing time. This might be done by pointlessly changing settings, setting multiple undefined settings, or changing the same setting multiple times in the same frame. Similarly, WINDOW_UPDATE or PRIORITY frames can be abused to cause an unnecessary waste of resources.

Large numbers of small or empty frames can be abused to cause a peer to expend time processing frame headers. Note however that some uses are entirely legitimate, such as the sending of an empty DATA frame to end a stream.

Header compression also offers some opportunities to waste processing resources, see [COMPRESSION] for more details on potential abuses.

In all these cases, there are legitimate reasons to use these protocol mechanisms. These features become a burden only when they are used unnecessarily or to excess.

An endpoint that doesn't monitor this behavior exposes itself to a risk of denial of service attack. Implementations SHOULD track the use of these types of frames and set limits on their use. An endpoint MAY treat activity that is suspicious as a connection error (Section 5.4.1) of type ENHANCE_YOUR_CALM.
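One way an endpoint might monitor this is a simple per-connection budget. The class below is a toy sketch; the name, the limit, and the idea of a single shared counter are all invented here, not prescribed by the specification:

```python
class FrameBudget:
    """Count potentially wasteful frames on one connection.

    Empty DATA frames, SETTINGS churn and PRIORITY floods are legitimate
    in small numbers; an endpoint may treat excessive use as a connection
    error of type ENHANCE_YOUR_CALM.
    """

    def __init__(self, limit: int = 1000):
        self.limit = limit
        self.counts = {}  # frame type -> observed count

    def record(self, frame_type: str) -> bool:
        """Record one frame; return True while the peer is within budget."""
        self.counts[frame_type] = self.counts.get(frame_type, 0) + 1
        return sum(self.counts.values()) <= self.limit
```

A real implementation would likely use sliding windows or per-type limits rather than one lifetime total, but the principle — count, compare against a bound, then escalate — is the same.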

Padding of data frames can be used to hide the exact size of the content of those frames. Aside from inherent usefulness, padding can be used to mitigate a class of attack where compressed content includes both attacker-controlled plaintext and secret data (for example, [BREACH]).

However, use of padding can result in less protection than might seem immediately obvious. In particular, randomized amounts of padding only increase the number of frames that an attacker has to observe to recover the length. Padding to a constant size is preferable, since that reveals minimal size information. For attacks based on compression, disabling compression might be preferable to use of padding.
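Padding to a constant size, as recommended above, can be illustrated with a short sketch (the helper is hypothetical, not from the specification): every payload is expanded to the next multiple of a fixed bucket, so an observer learns only which bucket the content falls into, not its exact length.

```python
def pad_to_bucket(payload: bytes, bucket: int = 256) -> bytes:
    """Pad payload with zero bytes up to the next multiple of `bucket`.

    Payloads of similar size then produce identically sized frames.
    (A real implementation would record the pad length in the frame's
    Pad Length field rather than rely on stripping trailing zeros.)
    """
    if bucket <= 0:
        raise ValueError("bucket must be positive")
    pad = (bucket - len(payload) % bucket) % bucket
    return payload + b"\x00" * pad
```

Contrast this with random padding: adding a random 0–255 bytes merely adds noise that averages out over repeated observations, whereas constant-size buckets reveal nothing further no matter how many frames are observed.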

Intermediaries SHOULD NOT remove padding; though an intermediary could remove padding and add differing amounts if the intent is to improve the protections on length.

[COMPRESSION] Ruellan, H. and R. Peon, “HPACK - Header Compression for HTTP/2.0”, Internet-Draft draft-ietf-httpbis-header-compression-05 (work in progress), December 2013.
[COOKIE] Barth, A., “HTTP State Management Mechanism”, RFC 6265, April 2011.
[HTTP-p1] Fielding, R., Ed. and J. Reschke, Ed., “Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing”, Internet-Draft draft-ietf-httpbis-p1-messaging-25 (work in progress), November 2013.
[HTTP-p2] Fielding, R., Ed. and J. Reschke, Ed., “Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content”, Internet-Draft draft-ietf-httpbis-p2-semantics-25 (work in progress), November 2013.
[HTTP-p4] Fielding, R., Ed. and J. Reschke, Ed., “Hypertext Transfer Protocol (HTTP/1.1): Conditional Requests”, Internet-Draft draft-ietf-httpbis-p4-conditional-25 (work in progress), November 2013.
[HTTP-p5] Fielding, R., Ed., Lafon, Y., Ed., and J. Reschke, Ed., “Hypertext Transfer Protocol (HTTP/1.1): Range Requests”, Internet-Draft draft-ietf-httpbis-p5-range-25 (work in progress), November 2013.
[HTTP-p6] Fielding, R., Ed., Nottingham, M., Ed., and J. Reschke, Ed., “Hypertext Transfer Protocol (HTTP/1.1): Caching”, Internet-Draft draft-ietf-httpbis-p6-cache-25 (work in progress), November 2013.
[HTTP-p7] Fielding, R., Ed. and J. Reschke, Ed., “Hypertext Transfer Protocol (HTTP/1.1): Authentication”, Internet-Draft draft-ietf-httpbis-p7-auth-25 (work in progress), November 2013.
[RFC2119] Bradner, S., “Key words for use in RFCs to Indicate Requirement Levels”, BCP 14, RFC 2119, March 1997.
[RFC2818] Rescorla, E., “HTTP Over TLS”, RFC 2818, May 2000.
[RFC3986] Berners-Lee, T., Fielding, R., and L. Masinter, “Uniform Resource Identifier (URI): Generic Syntax”, STD 66, RFC 3986, January 2005.
[RFC4648] Josefsson, S., “The Base16, Base32, and Base64 Data Encodings”, RFC 4648, October 2006.
[RFC5226] Narten, T. and H. Alvestrand, “Guidelines for Writing an IANA Considerations Section in RFCs”, BCP 26, RFC 5226, May 2008.
[RFC5234] Crocker, D. and P. Overell, “Augmented BNF for Syntax Specifications: ABNF”, STD 68, RFC 5234, January 2008.
[RFC6454] Barth, A., “The Web Origin Concept”, RFC 6454, December 2011.
[TCP] Postel, J., “Transmission Control Protocol”, STD 7, RFC 793, September 1981.
[TLS-EXT] Eastlake, D., “Transport Layer Security (TLS) Extensions: Extension Definitions”, RFC 6066, January 2011.
[TLS12] Dierks, T. and E. Rescorla, “The Transport Layer Security (TLS) Protocol Version 1.2”, RFC 5246, August 2008.
[TLSALPN] Friedl, S., Popov, A., Langley, A., and E. Stephan, “Transport Layer Security (TLS) Application Layer Protocol Negotiation Extension”, Internet-Draft draft-ietf-tls-applayerprotoneg-04 (work in progress), January 2014.

Adding padding for data frames.

Renumbering frame types, error codes, and settings.

Adding INADEQUATE_SECURITY error code.

Updating TLS usage requirements to 1.2; forbidding TLS compression.

Removing extensibility for frames and settings.

Changing setting identifier size.

Removing the ability to disable flow control.

Changing the protocol identification token to "h2".

Changing the use of :authority to make it optional and to allow userinfo in non-HTTP cases.

Allowing split on 0x0 for Cookie.

Reserved PRI method in HTTP/1.1 to avoid possible future collisions.

Added cookie crumbling for more efficient header compression.

Added header field ordering with the value-concatenation mechanism.

Marked draft for implementation.

Adding definition for CONNECT method.

Constraining the use of push to safe, cacheable methods with no request body.

Changing from :host to :authority to remove any potential confusion.

Adding setting for header compression table size.

Adding settings acknowledgement.

Removing unnecessary and potentially problematic flags from CONTINUATION.

Added denial of service considerations.

Marking the draft ready for implementation.

Renumbering END_PUSH_PROMISE flag.

Editorial clarifications and changes.

Added CONTINUATION frame for HEADERS and PUSH_PROMISE.

PUSH_PROMISE is no longer implicitly prohibited if SETTINGS_MAX_CONCURRENT_STREAMS is zero.

Push expanded to allow all safe methods without a request body.

Clarified the use of HTTP header fields in requests and responses. Prohibited HTTP/1.1 hop-by-hop header fields.

Requiring that intermediaries not forward requests with missing or illegal routing :-headers.

Clarified requirements around handling different frames after stream close, stream reset and GOAWAY.

Added more specific prohibitions for sending of different frame types in various stream states.

Making the last received setting value the effective value.

Clarified requirements on TLS version, extension and ciphers.

Committed major restructuring atrocities.

Added reference to first header compression draft.

Added more formal description of frame lifecycle.

Moved END_STREAM (renamed from FINAL) back to HEADERS/DATA.

Removed HEADERS+PRIORITY, added optional priority to HEADERS frame.

Added PRIORITY frame.

Added continuations to frames carrying header blocks.

Replaced use of "session" with "connection" to avoid confusion with other HTTP stateful concepts, like cookies.

Removed "message".

Switched to TLS ALPN from NPN.

Editorial changes.

Added IANA considerations section for frame types, error codes and settings.

Removed data frame compression.

Added PUSH_PROMISE.

Added globally applicable flags to framing.

Removed zlib-based header compression mechanism.

Updated references.

Clarified stream identifier reuse.

Removed CREDENTIALS frame and associated mechanisms.

Added advice against naive implementation of flow control.

Added session header section.

Restructured frame header. Removed distinction between data and control frames.

Altered flow control properties to include session-level limits.

Added note on cacheability of pushed resources and multiple tenant servers.

Changed protocol label form based on discussions.

Adopted as base for draft-ietf-httpbis-http2.

Updated authors/editors list.

Added status note.

Druid | Real²time Exploratory Analytics on Large Datasets

Animated Engines - Home

Exclusive: Google to Buy Artificial Intelligence Startup DeepMind for $400M | Re/code


Comments:"Exclusive: Google to Buy Artificial Intelligence Startup DeepMind for $400M | Re/code"

URL:http://recode.net/2014/01/26/exclusive-google-to-buy-artificial-intelligence-startup-deepmind-for-400m/


Google is shelling out $400 million to buy a secretive artificial intelligence company called DeepMind.

Google confirmed the deal after Re/code inquired about it, but declined to specify a price.

Based in London, DeepMind was founded by games prodigy and neuroscientist Demis Hassabis, along with Shane Legg and Mustafa Suleyman.

This is in large part an artificial intelligence talent acquisition, and Google CEO Larry Page led the deal himself, sources said. According to online bios, Hassabis in particular is quite a talent, a child prodigy in chess who was later called “probably the best games player in history” by the Mind Sports Olympiad.

DeepMind has only a landing page for a website where it describes its business as building learning algorithms for simulations, e-commerce and games. Profiles on LinkedIn indicate the company is about three years old.

Sources said Founders Fund is a major investor in DeepMind, along with Horizons Ventures. Skype and Kazaa developer Jaan Tallinn was an investor and advisor (not a founder, as we initially described him).

Other Google AI experts include Jeff Dean and Anna Patterson. The company also recently hired Geoffrey Hinton part-time.

Update: Though DeepMind may not be a household name in tech, sources in the artificial intelligence community describe the company as a formidable AI player and say it has been aggressively recruiting in the space. One source said DeepMind has a team of at least 50 people and has secured more than $50 million in funding. This person described DeepMind as “the last large independent company with a strong focus on artificial intelligence,” and said it competed with companies like Google, Facebook and Baidu for talent.

Multiple sources said the company has been developing a variety of approaches to AI, and applying them to various potential products including a recommendation system for e-commerce.

The Grand C++ Error Explosion Competition — Results of the Grand C++ Error Explosion Competition


Comments:"The Grand C++ Error Explosion Competition — Results of the Grand C++ Error Explosion Competition"

URL:http://tgceec.tumblr.com/post/74534916370


After much deliberation, the winners of the Grand C++ Error Explosion Competition are finally selected. There are two different award categories. The winners of the first category are those submissions that produced the largest error with the smallest amount of source code. These entries contain a multiplier number, which is just the size of the error message divided by the size of the source code. The second category is artistic merit.

Some of the code samples shown will overflow when displayed on the web. We apologize for any inconvenience this may cause.

Biggest error, category Anything

Name: Turtles all the way down
Author: Ed Hanway
Multiplier: 5.9 billion

This entry is the best implementation of the most common pattern, the double self-include. Here’s what it looks like:

#include ".//.//.//.//jeh.cpp"
#include "jeh.cpp"

This implementation produced almost six times the amount of error messages of the second best entry of the same type.

Biggest error, Category Plain

Name: y u no backreference?
Author: Chris Hopman
Multiplier: 790 million

The rules permitted includes in the plain category, so obviously the double include was used in this category as well.

#include "set>.cpp"
#include "set>.cpp"

Biggest error, category Bare Hands

Name: Const Variadic Templates
Author: Marc Aldorasi
Multiplier: 657 million

The bare hands category did not allow any use of the preprocessor, which led most people to use recursive or variadic template instantiations. This entry was the most compact of the lot.

template<class T,class...>class C{C<T*const,T,C>a;C<T,C>b;};C<int>c;

Best cheat

Name: What's perl?
Author: Chris Hopman

There were several interesting cheat attempts in this competition. For example, did you know that in C++ digraph expansion happens after line continuation expansion? We sure did not.

Many tried to exploit a division by zero bug in the verification script. One submission even had a patch for this, which tried to change the evaluator script so their entry would evaluate to an infinite error. The best cheat went in a completely different direction, however.

The actual code consisted of only one space. Since this entry was in the anything category, it was accompanied by a list of header search directories. That file looked like this.

/usr/include; perl -e "@c=\"x\"x(2**16); while(1) {print @c}" 1>&2

When passed to the C++ compiler invocation line, this allows the shell code to escape the test harness sandbox. Extra credit for using Perl, which is the only language less readable than C++ templates.

Most surprising

Name: templates and nested classes are not best practice
Author: Aaron Grothe

This piece of code looks innocent but explodes in a completely unexpected manner. We also tested this with Clang, which correctly detects the missing semicolon, after which it nevertheless tries to evaluate the infinite template recursion and eventually segfaults. This entry gives a glimpse of the kinds of issues an IDE’s code completion engine needs to guard against.

template<class T>class L{L<T*>operator->()};L<int>i=i->

Most lifelike

Name: Bjarne's nightmare
Author: Victor Zverovich

Suppose you are given a task of adding some new functionality to an existing code base. You have been told that the guy who wrote it was “really smart” and that his code is of “enterprise quality”. You check out the code and open a random file in an editor. It appears on the screen. After just one microsecond of looking at the code you have lost your will to live and want nothing more than to beat your head against the table until you lose consciousness.

This entry could be that code. We’re glad we only needed to measure it rather than to understand and alter it.

#include <map>
#include <algorithm>
template<class T,class U>void f(T,U u){std::vector<std::vector<T>>v;auto i=end(v);find(i,i,u);find(i,i,&u);U*p,**q,r(),s(U);find(i,i,&p);find(i,i,&q);find(i,i,r);find(i,i,&r);find(i,i,s);find(i,i,&s);}template<class T>void f(T t){f(t,0);f(t,0l);f(t,0u);f(t,0ul);f(t,0ll);f(t,.0);f(t,.0l);f(t,.0f);f(t,' ');f(t,L' ');f(t,u' ');f(t,U' ');f(t,"");f(t,L"");}int main(){f(0);f(0l);f(0u);f(0ul);f(0ll);f(.0);f(.0l);f(.0f);f(' ');f(L' ');f(u' ');f(U' ');f("");f(L"");f(u"");f(U"");}

Barest hands

Title: whatever
Author: John Regehr

This entry does not have any template definitions or include recursion and yet it put up an admirable fight. This serves as an important reminder to all of us: when used correctly even the simplest of tools can be used to build impressive results.

struct x struct z<x(x(x(x(x(x(x(x(x(x(x(x(x(x(x(x(x(y,x(y><y*,x(y*w>v<y*,w,x{}

Epilogue

We would like to thank all people who participated in the competition. We hope that all participants as well as you readers have enjoyed this experience.

The final question now remaining is whether there will be a second TGCEEC next year?

The answer to this is simple: yes, if you, the people, demand it.

Till we meet again.

Python on Wheels | Armin Ronacher's Thoughts and Writings


Comments:"Python on Wheels | Armin Ronacher's Thoughts and Writings"

URL:http://lucumr.pocoo.org/2014/1/27/python-on-wheels/


written on Monday, January 27, 2014

The Python packaging infrastructure has long received criticism from both Python developers and system administrators. For a long time even the Python community itself could not agree on exactly which tools to use. We had distutils, setuptools, distribute, distutils2 as basic distribution mechanisms and then virtualenv, buildout, easy_install and pip as high level tools to deal with this mess.

As distribution formats before setuptools we had source files, and for Windows there were some binary distributions in the form of MSIs. On Linux we had bdist_dumb, which was historically broken, and bdist_rpm, which only worked on Red Hat based systems. But even bdist_rpm did not actually work well enough that people actually used it.

A few years ago PJE stepped up and tried to fix the initial distribution problems by providing the mix of setuptools + pkg_resources to improve distutils and to provide metadata for Python packages. In addition to that he wrote the easy_install tool to install packages. In the absence of a distribution format that supported the required metadata, the egg format was invented.

Python eggs are basically zip packages that include the python package plus the metadata that is required. Even though many people probably never built eggs intentionally, the egg metadata is still alive and kicking and everybody deploys things through setuptools now.

Now unfortunately a few years ago the community split in half and part of the community declared the death of binary distributions and eggs. When that happened, the replacement for easy_install (pip) stopped accepting eggs altogether.

Fast forward a few years. The removal of binary distributions became very painfully noticeable as people moved more and more to cloud deployment, and having to recompile C libraries on every single machine is no fun. Because eggs at that point were poorly understood, I assume, they were reimplemented on top of newer PEPs and called wheels.

As a general information before we dive in: I'm assuming that you are in all cases operating out of a virtualenv.

What are Wheels

So let's start simple. What exactly are wheels, and how do they differ from eggs? Both eggs and wheels are basically just zip files. The main difference is that you could import eggs without having to unpack them. Wheels on the other hand are just distribution archives that you need to unpack upon installation. While there are technically no reasons for wheels not to be importable, that was never the plan to begin with and there is currently no support for importing wheels directly.
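Since a wheel is nothing more than a zip archive with a dist-info directory inside, you can poke at one with just the standard library. The sketch below builds a minimal wheel-shaped archive in memory to show this; the package name and file contents are made up for illustration:

```python
import io
import zipfile

# Build a minimal wheel-shaped archive in memory: a package plus the
# dist-info metadata directory that the wheel format requires.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as whl:
    whl.writestr("demo/__init__.py", "")
    whl.writestr("demo-1.0.dist-info/METADATA",
                 "Metadata-Version: 2.0\nName: demo\n")
    whl.writestr("demo-1.0.dist-info/WHEEL",
                 "Wheel-Version: 1.0\nRoot-Is-Purelib: true\n")

# Reading it back needs only zipfile -- no packaging tools involved.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as whl:
    names = whl.namelist()
print(names)
```

The same `zipfile.ZipFile(...)` call works on any real `.whl` you have lying around, which is a handy way to check what a wheel actually ships.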

The other main difference is that eggs include compiled Python bytecode whereas wheels do not. The biggest advantage of this is that you don't need to make wheels for different Python versions as long as you don't ship binary modules that link against libpython. On newer Python 3 versions you can actually even safely link against libpython as long as you only use the stable ABI.

There are a few problems with wheels however. One of the problems is that wheels inherit some of the problems that eggs already had. For instance Linux binary distributions are still not an option for most people because of two basic problems: Python itself being compiled in different forms on Linux and modules being linked against different system libraries. The first problem is caused by Python 2 coming in two flavours that are incompatible with each other: UCS2 Pythons and UCS4 Pythons. Depending on which mode Python is compiled with, the ABI looks different. Presently the wheel format (from what I can tell) does not annotate which Python unicode mode a library is linked for. A separate problem is that Linux distributions are less compatible with each other than you would wish, and concerns have been brought up that wheels compiled on one distribution will not work on others.

The end effect of this is that you presently cannot upload binary wheels to PyPI, due to concerns about incompatibility with different setups.

In addition to that, wheel currently only knows two extremes: binary packages and pure Python packages. When something is a binary package it's specific to a Python version on 2.x. Right now that's actually not the worst thing in the world because Python 2.x is end of life and we really only need to build packages for 2.7 for a long time to come. If however we were to start considering Python 2.8, it would be interesting to have a way to say: this package is Python version independent but it ships binaries, so it needs to be architecture specific.

The reason you might have a package like this is packages that ship shared libraries loaded with ctypes or CFFI. These libraries do not link against libpython and as such work across Python versions (even across Python implementations, which means you can use them with PyPy).

On the bright side: nothing stops you from using binary wheels for your own homogenous infrastructure.

Building Wheels

So now that you know what a wheel is, how do you make one? Building a wheel out of your own libraries is a very straightforward process. All you need to do is use a recent version of setuptools and the wheel library. Once you have both installed you can build a wheel out of your package by running this command:

$ python setup.py bdist_wheel

This will throw a wheel into your distribution folder. There is however one extra thing you should be aware of, and that's what happens if you ship binaries. By default the wheel you build (assuming you don't use any binary build steps as part of your setup.py) is a pure Python wheel. This means that even if you ship a .so, .dylib or .dll as part of your package data, the wheel spit out will look like it's platform independent.

The solution for this problem is to manually subclass the setuptools distribution to flip the purity flag to false:

import os
from setuptools import setup
from setuptools.dist import Distribution

class BinaryDistribution(Distribution):
    def is_pure(self):
        return False

setup(
    ...,
    include_package_data=True,
    distclass=BinaryDistribution,
)

Installing Wheels

Now you have a wheel, how do you install it? On a recent pip version you can install it this way:

$ pip install package-1.0-cp27-none-macosx_10_7_intel.whl

But what about your dependencies? This is where it gets a bit trickier. Generally what you would want is to install a package without ever connecting to the internet. Pip thankfully supports that by disabling downloading from an index and by providing a path to a folder for all the things it needs to install. So assuming you have all the wheels for all your dependencies in just the right version available, you can do this:

$ pip install --no-index --find-links=path/to/wheels package==1.0

This will then install the 1.0 version of package into your virtualenv.

Wheels for Dependencies

Alright, but what if you don't have the wheels for your dependencies? Pip in theory supports doing that through the wheel command. In theory this is supposed to work:

pip wheel --wheel-dir=path/to/wheels package==1.0

In this case wheel will throw all packages that package depends on into the given folder. There are two problems with this.

The first one is that the command currently has a bug and does not actually throw dependencies into the wheel folder if the dependencies are already wheels. What the command is supposed to do is to collect all the dependencies, convert them into wheels if necessary, and then place them in the wheel folder. What's actually happening though is that it only places wheels there for things that were not wheels to begin with. So if a dependency is already available as a wheel on PyPI then pip will skip it and not actually put it there.

The workaround is a shell script that goes through the download cache and manually moves downloaded wheels into the wheel directory:

#!/bin/sh
WHEEL_DIR=path/to/wheels
DOWNLOAD_CACHE_DIR=path/to/cache
rm -rf $DOWNLOAD_CACHE_DIR
mkdir -p $DOWNLOAD_CACHE_DIR
pip wheel --use-wheel -w "$WHEEL_DIR" -f "$WHEEL_DIR" \
 --download-cache "$DOWNLOAD_CACHE_DIR" package==1.0
for x in "$DOWNLOAD_CACHE_DIR/"*.whl; do
 mv "$x" "$WHEEL_DIR/${x##*%2F}"
done

The second problem is more severe. How can pip wheel find your own package if it's not on PyPI? The answer is: it cannot. So what the documentation generally recommends is to not run pip wheel package but to run pip wheel -r requirements.txt where requirements.txt includes all the dependencies of the package. Once that is done, manually copy your own package's wheel in there and distribute the final wheel folder.

DevPI Based Package Building

That workaround with depending on the requirements certainly works in simple situations, but what do you do if you have multiple in-house Python packages that depend on each other? It quickly falls apart.

Thankfully Holger Krekel sat down last year and built a solution for this problem called devpi. DevPI is essentially a practical hack around how pip interacts with PyPI. Once you have DevPI installed on your own computer it acts as a transparent proxy in front of PyPI and you can point pip to install from your local DevPI server instead of the public PyPI. Not only that, it also automatically caches all packages downloaded from PyPI locally, so even if you kill your network connection you can continue downloading those packages as if PyPI was still running. In addition to being a proxy you can also upload your own packages into that local server, so once you point pip to that server it will find both public packages and your own ones.

In order to use DevPI I recommend making a local virtualenv and installing it into that and then linking devpi-server and devpi into your search path (in my case ~/.local/bin is on my PATH):

$ virtualenv devpi-venv
$ devpi-venv/bin/pip install --upgrade pip wheel setuptools devpi
$ ln -s `pwd`/devpi-venv/bin/devpi ~/.local/bin
$ ln -s `pwd`/devpi-venv/bin/devpi-server ~/.local/bin

Afterwards all you need to do is to start devpi-server and it will continue running until you shut it down or reboot your computer:

$ devpi-server --start

Once it's running you need to initialize it once:

$ devpi use http://localhost:3141
$ devpi user -c $USER password=
$ devpi login $USER --password=
$ devpi index -c yourproject

In this case because I use DevPI locally for myself only I use the same name for the DevPI user as I use for my system. As the last step I create an index named after my project. You can have multiple indexes next to each other to separate your work.

To point pip to your DevPI you can export an environment variable:

$ export PIP_INDEX_URL=http://localhost:3141/$USER/yourproject/+simple/

Personally I place this in the postactivate script of my virtualenv to not accidentally download from the wrong DevPI index.

To place your own wheels on your local DevPI you can use the devpi binary:

$ devpi use yourproject
$ devpi upload --no-vcs --formats=bdist_wheel

The --no-vcs flag disables some magic in DevPI which tries to detect your version control system and moves some files off first. Personally this does not work for me because I ship files in my projects that I do not want to put into version control (like binaries).

Lastly I would strongly recommend breaking your setup.py files in a way that PyPI will reject them but DevPI will accept them to not accidentally release your code with setup.py release. The easiest way to accomplish this is to add an invalid PyPI trove classifier to your setup.py:

setup(
    ...,
    classifiers=['Private :: Do Not Upload'],
)

Wrapping it Up

Now with all that done you can start inter-depending on your own private packages and build out wheels in one go. Once you have that, you can zip them up, upload them to another server and install them into a separate virtualenv.

All in all this whole process will get a bit simpler when the pip wheel command stops ignoring already existing wheels. Until then, a shell script is not the worst workaround.

Comparing to Eggs

Wheels currently seem to have more traction than eggs. The development is more active, PyPI started to add support for them and because all the tools start to work for them it seems to be the better solution. Eggs currently only work if you use easy_install instead of pip which seems to be something very few people still do.

I assume the Zope community is still largely based around eggs and buildout and I assume if an egg based deployment works for you, then that's the way to go. I know that many did not actually use eggs at all to install Python packages and instead built virtualenvs, zipped them up and sent them to different servers. For that kind of deployment, wheels are definitely a much superior solution because it means different servers can have the libraries in different paths. This previously was an issue because the .pyc files were created on the build server for the virtualenv and the .pyc files include the filenames.

With wheels the .pyc files are created upon installation into the virtualenv and will automatically include the correct paths.

So there you have it. Python on wheels. It's there, it kinda works, and it's probably worth your time.

This entry was tagged deployment, python and thoughts


One Man Has Written Virtually Every Major Pop Song Of The Last 20 Years. And You've Probably Never Heard His Name | Celebrity Net Worth


Comments:"One Man Has Written Virtually Every Major Pop Song Of The Last 20 Years. And You've Probably Never Heard His Name | Celebrity Net Worth"

URL:http://www.celebritynetworth.com/articles/entertainment-articles/max-martin-powerful-person-music-business-people-never-even-head-name/


Pop quiz: Who has been the most influential and powerful person in pop music over the last 20 years? Britney Spears? Madonna? Michael Jackson? Beyonce? Simon Cowell? Sure, these people have all been extraordinarily successful, but what if we told you that there's one man whose musical accomplishments easily trump all of these artists'? And what if we told you that most people have probably never even heard of this person and definitely wouldn't recognize him walking down the street? Sounds impossible, right? Well, if at any point in the last 20 years you've heard a song from The Backstreet Boys, Britney Spears, N'Sync, Kelly Clarkson, Taylor Swift, Ace of Base, Katy Perry, Celine Dion, Bon Jovi, Adam Lambert, Carrie Underwood, Pink or Justin Bieber… then you've unwittingly been subject to the creations of a Swedish-born musical genius who goes by the name of Max Martin. Believe it or not, over the past 15-20 years Max Martin has been the brains, ears and talent behind virtually every hit pop song released to the screaming masses. He's personally responsible for churning out more Billboard singles than Michael Jackson and Madonna COMBINED. And since he is essentially unrecognizable to the average person, that might make him the most famous non-famous person on the planet. This is his story.

Who is Max Martin?

This past April, the Backstreet Boys celebrated the 20-year anniversary of their career by being inducted into the Hollywood Hall of Fame. On hand to give remarks was an un-remarkable looking man named Max Martin. Martin, with his Seth Rogen-esque neck beard and hair down to his shoulders, wrote many of the BSB's hit songs. During his speech, Martin reminisced fondly of his first encounter with the Boys. He met the future superstars in a hip Stockholm restaurant back in the mid-nineties. He told them he was excited to hear them sing in the studio later that evening. In response, the band stood up and serenaded him on the spot. "I got the goosebumps. It was amazing". For millions of people around the world, this would have been the event of a lifetime. But for Martin, it was just another day in the life.

So who is Max Martin aka Martin Karl Sandberg? Was he a huge rock star in Sweden? Not really. A powerful record executive? Nope. A Svengali-like manager who knows all the secrets to success in the music business? Hardly. If he were any one of these things, it would be hard to explain why the most popular song produced by his own band, "It's Alive", has fewer than 1000 views on YouTube today.

Max Martin – Life Story

Max may not have gotten super famous with his hard rock band "It's Alive", but he did receive two important gifts from the experience: 1) When the band recorded their first and only album, they hired a producer named Denniz PoP, and Denniz is the person who suggested that "Martin Karl Sandberg" should change his name forever to Max Martin. 2) While recording their album, Max astutely learned all the tricks of the producing trade from Denniz. In an interview years later, Max explained: "I didn't even know what a producer did. I spent two years – day and night – in that studio trying to learn what the hell was going on."

Max Martin with Britney Spears

When "It's Alive" flopped at record stores, Martin immediately decided that he really belonged on the other side of the glass in a recording studio. He began assisting Denniz PoP as a sound engineer and songwriter for all the incoming talent. One of the first projects that Martin collaborated on was fellow Swedish band Ace of Base and their 1995 album "The Bridge". This album eventually sold six million copies worldwide. As big a success as that was, it was his next project that catapulted Martin to astronomical success. Impressed by his songwriting and producing work, an A&R executive from the record company Jive decided Max was the perfect person to work on the debut album of a fresh young boy band called The Backstreet Boys. Martin co-wrote the singles "Quit Playing Games (With My Heart)," "As Long as You Love Me," and "Everybody (Backstreet's Back)", arguably the three most popular songs on the band's self-titled debut album. The album went on to sell more than 10 million copies worldwide.

What is perhaps most remarkable about Martin's career is the consistency with which he churns out the songs that find their way into the ears of millions of people around the globe. Britney Spears, Celine Dion, Christina Aguilera, Kelly Clarkson, Pink, Avril Lavigne. More than half of the Backstreet Boys' 1999 album Millennium. The list goes on and on and on and on. Since 1999, Max has written or co-written 16 songs that hit #1 on Billboard. There aren't enough metaphors to properly describe his success. He's like the Michael Jordan, Wayne Gretzky, Roger Federer, Muhammad Ali, Michael Phelps, Usain Bolt… of music. Here is a list of some of the biggest pop songs that Max Martin has written and produced:

Katy Perry: "I Kissed a Girl", "Teenage Dream", "California Gurls", "Roar", "Dark Horse"

Britney Spears: "Oops!… I Did It Again", "Stronger", "…Baby One More Time", "Till The World Ends"

Backstreet Boys: "Quit Playing Games With My Heart", "I Want It That Way", "Larger Than Life", "As Long As You Love Me", "Shape of my Heart"

Kelly Clarkson: "Since U Been Gone", "My Life Would Suck Without You"

N'Sync: "Tearin' Up My Heart", "It's Gonna Be Me", "I Want You Back"

Taylor Swift: "We Are Never Ever Getting Back Together", "I Knew You Were Trouble", "22"

If you are a Spotify user, I compiled a playlist of Max Martin's biggest and most popular hit songs. To listen, just paste the following text into your Spotify search box:

spotify:user:bluetahoe99:playlist:0HYEI6ov3lAf87Y5EUKqdq

Max Martin ASCAP Awards

For Martin, it's the fun of the work that keeps him going: "If I did it because it was my job, and I only did it to make money, I don't think I'd still be doing it." Which isn't to say that the man behind countless hits hasn't found financial success. From the royalties and producing fees he has received from all of this fun he's been having, Max Martin has earned a personal net worth of $250 million dollars. That's right. He's worth a quarter of a billion dollars and you've probably never heard his name.

Max has won ASCAP's Songwriter of the Year award six times. His most recent #1 single is Katy Perry's "Roar" which has been nominated for several Grammys including Song of the Year. He also contributed to the album "Red" by Taylor Swift which has been nominated for Album of the Year. Whether you love the songs or hate them, whether you can admit that behind the corniness of "As Long as You Love Me" is a pop mastermind, or not, one thing is for certain: Max Martin isn't going anywhere anytime soon. Maybe if the accolades and records keep piling up, people might actually start knowing his name and face!

Max Martin Articles

Max Savage Levenson

A recent East Coast transplant, Max is currently studying at the UC Berkeley School of Journalism, where he reports on the eccentric music community of the Bay Area and the state of the Oakland School District. Before arriving in California, he lived in Baltimore and Paris, where he documented music on the page and on the screen. Follow him on Google+.

Samsung and Google Sign Global Patent License Agreement | Samsung Electronics Official Blog: Samsung Tomorrow

Happy 40th birthday, Dungeons & Dragons! | Plugged In - Yahoo Games

Comments:"Happy 40th birthday, Dungeons & Dragons! | Plugged In - Yahoo Games"

URL:http://games.yahoo.com/blogs/plugged-in/happy-40th-birthday-dungeons-dragons-214544046.html


Dungeons & Dragons, the tabletop game that introduced fantasy role-playing to most of the world, turns the big four-oh this weekend -- and it's showing no signs of slowing down.

While D&D is almost a recurring character on some prime time TV shows these days (hat tip to you, producers of "The Big Bang Theory"), it spent years as the calling card of the socially awkward. No matter how fun the game is, toting around bags of polyhedral dice and regularly consulting the Monster Manual for hit-point data was never a way to get in with the cool kids.

For fans, though, that never mattered. The journey into the game's imaginative world was a respite from the pressures of the real world, a chance to be the hero and save the day.

D&D wasn't the first game to explore role-playing, but it was the first to do so in a non-wargaming setting and the first to break big. Drawing upon the work of authors like J.R.R. Tolkien and H.P. Lovecraft, countless mythologies, and their own vivid imaginations, creators Gary Gygax and Dave Arneson launched the groundbreaking game on January 26, 1974, the best date we have for the game’s official birth.

The original D&D set (Credit: Wizards of the Coast)

The first version of the game -- called OD&D, for original Dungeons & Dragons -- was a small set of three books. It sold just 1,000 copies in 1974, but tripled sales the next year, and they kept climbing.

The game hit its stride in the 1980s after the release of Advanced Dungeons & Dragons, which, between 1977 and 1979, led to three hardcover rulebooks that became indispensable to fans: the Player's Handbook, the Dungeon Master's Guide and the Monster Manual. Further revisions came in 1989, 2000, 2003 and 2008.

To date, Dungeons & Dragons has generated well over $1 billion and has been played by over 20 million people. And that number's expected to rise this year thanks to the release of a new set of rules for the game -- the first update in six years.

The new take, called Tyranny of Dragons, is due this summer and will give players a chance to battle Tiamat, the five-headed queen of the dragons. A constant villain in D&D, Tiamat was also a key character in the memorable Dungeons & Dragons cartoon in the early 80s.

D&D's influence on pop-culture has been extensive. It was responsible for seminal video game franchises like Baldur's Gate, Neverwinter Nights and Pool of Radiance, and it influenced countless others, from Ultima to The Elder Scrolls. It was supported by a pair of magazines ("Dungeon" and "Dragon," naturally), and it's produced tens of millions of dollars in sales of dice, miniature characters, and other game tie-ins.

Like any new form of entertainment, it had its critics, too. With Satanic-looking demons, illustrations of topless female monsters, and a heavy emphasis on spellcasting, the game was opposed by some Christian groups, who claimed it was leading youth down the wrong path. Much like today's critics of video games, opponents sought to blame the game for many societal problems.

Psychologists later stepped in to disprove these theories, but not before Tom Hanks -- yes, THE Tom Hanks -- landed his first major lead role in the so-bad-it’s-good TV movie, “Mazes and Monsters,” in which he plays a geek who can’t separate reality from fantasy:

Despite the protests -- and the social stigma some people attached to playing D&D -- the game has continued to draw a huge audience, including celebrity fans like Mike Myers, Vin Diesel, Stephen Colbert and even Dame Judi Dench. Actor/director Jon Favreau says the game gave him "a really strong background in imagination, storytelling, understanding how to create tone and a sense of balance."

Us too, Jon. In honor of the game's birthday, we're digging out our d20s and crumpled character sheets for a good, old-fashioned run through "The Keep on the Borderlands." How about you? Share some of your favorite D&D memories in the comments!

For game news, free codes and more, Like us on Facebook and follow @yahoogames on Twitter!

whily/yalo · GitHub

Computers for Cynics 2 - It All Went Wrong at Xerox PARC - YouTube

Comments:"Computers for Cynics 2 - It All Went Wrong at Xerox PARC - YouTube"

URL:https://www.youtube.com/watch?v=c6SUOeAqOjU


Computers for Cynics 2 - It All Went Wrong at Xerox PARC

Published on May 28, 2012

Ted Nelson continues to cast doubt on Computer Basics.
