
Big changes to Erlang


Comments:"Big changes to Erlang"

URL:http://joearms.github.io/2014/02/01/big-changes-to-erlang.html


Yesterday release candidate 1 of version R17 of Erlang was released.

This was a major event. Version R17 has some changes to Erlang that significantly improve the language. These are the biggest changes since the introduction of higher order functions and list comprehensions.

Erlang now has maps and named funs.

We’ve been talking about maps for over twelve years, but now they are here to stay.

Why the long wait? We wanted maps to be a replacement for records and to be as efficient as records, and it's not blindingly obvious how to do so.

In the remainder of this article I’ll explain some of the new features in Erlang version R17.

Records are dead - long live maps!

Maps are associative collections of key-value pairs. In Perl and Ruby they are called hashes, in C++ and Java they are called maps, in Lua they are called tables, and Python calls them dictionaries. We call them maps.

We can create a new map by writing:

A = #{key1 => Val1, key2 => Val2, ...}

And we can pattern match a map by writing:

#{key1 := Pattern1, key2 := Pattern2, ...} = VarContainingAMap

Updating a map is done as follows:

 NewX = X#{ key1 => Val1, ... KeyN := ValN, ...}

The update operators are either “=>” or “:=”, but what they do is subtly different.

Key => Val Introduces a NEW key

This should be used whenever you know that you want to add a new key to a map or when you are unsure if the key exists in the old map or not.

Key := Val Updates an existing Key

The key must be present in the old map. If Key is not in the old map Erlang will complain loudly.
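Here is a short shell session showing the difference (a sketch of roughly what you should see on an R17-era system; the printed order of keys and the exact error text vary between releases):

1> M = #{name => "joe"}.
#{name => "joe"}
2> M#{age => 42}. % "=>" may introduce a new key
#{age => 42,name => "joe"}
3> M#{name := "mike"}. % ":=" updates an existing key
#{name => "mike"}
4> M#{age := 42}. % ":=" on a missing key crashes
** exception error: bad argument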

Using two operations instead of one has several advantages:

A spelling mistake cannot accidentally introduce a new key

Suppose we want to update a key called age, and suppose that there was only one operator “:” that both updated an old element in the map and created a new element in the map. In this case we might write:

birthday(#{age : N} = Person) ->
 Person#{arg : N+1}

So calling birthday(#{person:bill, age:12}) would return #{person:bill, age:12, arg:13}

A new element (arg) has been introduced into the map, but it has the wrong key, and it will live in the map for a long time. We meant to write age but by mistake we wrote arg. This is how things work in Javascript: a simple misspelling of a tag name will create a bad element in an object, and the consequences of this probably won't be discovered until much later.

This is why Erlang has two operators, Key => Val always adds a new element to the map, but Key := Val updates an existing element and shrieks with protest and immediately crashes the program if you try to update an element that does not exist in the map.
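For completeness, here is the birthday example transcribed into the real R17 syntax (my transcription, using the two operators described above):

birthday(#{age := N} = Person) ->
    Person#{age := N + 1}.

Now a misspelling such as Person#{arg := N + 1} cannot quietly introduce a bad key: arg is not present in the map, so the ":=" update crashes immediately and the mistake is caught at once.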

Different maps can share key descriptors

Updating existing keys in the map has another unexpected consequence. If the compiler sees a line of code like this:

 X1 = X#{key1 := Val1, key2 := Val2, ...}

When all the update operators are “:=” then it knows that the new object X1 has the same keys as X. This means we can store the descriptor of X (i.e., which keys the object has) and the values in different places, and that the new object X1 can share the keys used by X.

If we now build a large list of objects, all with the same set of keys, then they can share a single key descriptor. This means that asymptotically maps will be as efficient in terms of storage as records.

Credit for this idea comes from Richard O'Keefe who pointed this out years ago. I don’t think any other programming language does this, but I might be wrong.

What are the keys in maps?

The keys in maps can be any ground term - which is great. We argued over this for years.

Here’s an example:

> Z = #{ {age,fred} => 12, {age, bill} => 97, 
 {color, red} => {rgb,255,0,0}}.

Then I can pattern match out any of the arguments like this:

> #{ {color,red} := X1} = Z.

which binds X1 to {rgb,255,0,0}

So now we have maps - the keys can be any ground term. We can handle large lists of maps that all have the same keys in a space efficient manner and we can guard against accidentally introducing bad keys into the map in a sensible way that will not cause problems in the future.

Named funs

In Erlang you can't use a name inside a fun before the name has been defined. This makes it rather difficult to define things like factorial in a fun.

Why is this?

You might think that the natural way to define factorial inside a fun would be to say:

 Fac = fun(0) -> 1; (N) -> N*Fac(N-1) end

The problem here is that inside the fun (i.e., between the fun and end symbols) the variable Fac has not yet been defined; it's only defined outside the scope of the fun ... end construct.

There is a way round this: we add an additional argument to the internal function that contains the name of the function to be called, so we write the inner part of the factorial function as

 fun(F, 0) -> 1;
 (F, N) -> N*F(F, N-1)
 end.

and pass the function to be called as an additional argument to the function. Now everything is defined. If we say:

 G = fun(F, 0) -> 1;
 (F, N) -> N*F(F, N-1)
 end.

Then G(G, X) will compute factorial of X.

But we want to hide this horrible function G, so we write:

 Fac = fun(X) ->
 G = fun(_, 0) -> 1;
 (Fac, N) -> N*Fac(Fac, N-1)
 end,
 G(G, X)
 end.

After this feat of intellectual masturbation is over you have defined the factorial function. We can even type this monstrously horrible expression into the shell and test it:

1> F = fun(X) ->
1> G = fun(_, 0) -> 1;
1> (Fac, N) -> N*Fac(Fac, N-1)
1> end,
1> G(G, X)
1> end.
#Fun
2> F(10).
3628800

And goodness gracious - it works.

This trick is well known to old-style functional programmers; they waffle on about Y combinators and eat this stuff for breakfast, but it's the kind of stuff that gives functional programming a bad name. Try explaining this to first-year students who had a heavy night out at the pub the evening before.

But there is an easier way. Allow names in the function definition before they are fully defined.

So now we have a better way ... and there should be a drum roll here.

Here’s the new way (in the shell).

 1> F = fun Fact(0) -> 1; 
 Fact(N) -> N * Fact(N - 1) 
 end.
 #Fun
 2> F(10).
 3628800

Which is a zillion times better than the old way.
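Named funs are not just for factorial; any place you would previously have reached for the G(G, X) trick, you can now simply give the fun a name. As a small illustrative sketch (my example, not from the original post), here is an anonymous depth-first flatten written directly in the shell:

1> Flatten = fun Walk([]) -> [];
                 Walk([H|T]) when is_list(H) -> Walk(H) ++ Walk(T);
                 Walk([H|T]) -> [H | Walk(T)]
             end.
#Fun
2> Flatten([1,[2,[3,4]],5]).
[1,2,3,4,5]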


School ditches rules and loses bullies - National News | TVNZ


Comments:"School ditches rules and loses bullies - National News | TVNZ"

URL:http://tvnz.co.nz/national-news/school-ditches-rules-and-loses-bullies-5807957


Ripping up the playground rulebook is having incredible effects on children at an Auckland school.

Chaos may reign at Swanson Primary School with children climbing trees, riding skateboards and playing bullrush during playtime, but surprisingly the students don't cause bedlam, the principal says.

The school is actually seeing a drop in bullying, serious injuries and vandalism, while concentration levels in class are increasing.

Principal Bruce McLachlan rid the school of playtime rules as part of a successful university experiment.

"We want kids to be safe and to look after them, but we end up wrapping them in cotton wool when in fact they should be able to fall over."

Letting children test themselves on a scooter during playtime could make them more aware of the dangers when getting behind the wheel of a car in high school, he said.

"When you look at our playground it looks chaotic. From an adult's perspective, it looks like kids might get hurt, but they don't."

Swanson School signed up to the study by AUT and Otago University just over two years ago, with the aim of encouraging active play.

However, the school took the experiment a step further by abandoning the rules completely, much to the horror of some teachers at the time, he said.

When the university study wrapped up at the end of last year the school and researchers were amazed by the results.

Mudslides, skateboarding, bullrush and tree climbing kept the children so occupied the school no longer needed a timeout area or as many teachers on patrol.

Instead of a playground, children used their imagination to play in a "loose parts pit" which contained junk such as wood, tyres and an old fire hose.

"The kids were motivated, busy and engaged. In my experience, the time children get into trouble is when they are not busy, motivated and engaged. It's during that time they bully other kids, graffiti or wreck things around the school."

Parents were happy too because their children were happy, he said.

But this wasn't a playtime revolution, it was just a return to the days before health and safety policies came to rule.

AUT professor of public health Grant Schofield, who worked on the research project, said there are too many rules in modern playgrounds.

"The great paradox of cotton-woolling children is it's more dangerous in the long-run."

Society's obsession with protecting children ignores the benefits of risk-taking, he said.

Children develop the frontal lobe of their brain when taking risks, meaning they work out consequences. "You can't teach them that. They have to learn risk on their own terms. It doesn't develop by watching TV, they have to get out there."

The research project morphed into something bigger when plans to upgrade playgrounds were stopped due to over-zealous safety regulations and costly play equipment.

"There was so many ridiculous health and safety regulations and the kids thought the static structures of playgrounds were boring."

When researchers - inspired by their own risk-taking childhoods - decided to give children the freedom to create their own play, principals shook their heads but eventually four Dunedin schools and four West Auckland schools agreed to take on the challenge, including Swanson Primary School.

It was expected the children would be more active, but researchers were amazed by all the behavioural pay-offs. The final results of the study will be collated this year.

Schofield urged other schools to embrace risk-taking. "It's a no brainer. As far as implementation, it's a zero-cost game in most cases. All you are doing is abandoning rules," he said.

(Thanks for your interest in this story. Due to the huge number of comments we are unable to continue moderating them.)

Copyright © 2014, Television New Zealand Limited.

How In-app Purchases Has Destroyed The Industry (by @baekdal) #opinion


Comments:"How In-app Purchases Has Destroyed The Industry (by @baekdal) #opinion"

URL:http://www.baekdal.com/opinion/how-inapp-purchases-has-destroyed-the-industry/


I don't like writing negative articles that don't include a solution to the problem, but in this case, there is no solution. The state of in-app purchases has now reached a level where we have completely lost it. Not only has the gaming industry shot itself in the foot, hacked off their other foot, and lost both its arms ... but it's still engaging in a strategy that will only damage it further.

Why are these gaming studios so intent on killing themselves?

Note: Image from the wonderful 'Black Knight' by Monty Python.

We have reached a point at which mobile games can't even be said to be games anymore. Playing a game means that you have fun. It doesn't mean that you sit around and wait for the game to annoy you for so long that you decide to pay credits to speed it up. And for an old geezer like me who remembers the glory days of gaming back in the 1990s, it's just unbearable to watch.

With the help of NerdCubed (great guy), let me illustrate just how bad in-app purchases in games have become. Let's compare a game from the 1990s with the same game on the iPad today.

I think you will be as shocked as I am.

Dungeon Keeper, 1997

Back in the 1990s, one of the best game developers in the world was a company called Bullfrog Productions. It created some of the best strategy games. You might remember games like Syndicate, Theme Hospital and Dungeon Keeper.

Obviously, the games' graphics look horrible today, considering that the high-res screen resolution back then was only 640x480. Your iPad is running at 2048x1536.

But let's look at one of these games: Dungeon Keeper.

It's a game that you can buy today (full game, everything unlocked + all expansion packs included) for $5.99 over at GOG. It takes place underground. Your task is to build up your 'dungeon' by digging out rooms and battling the other dungeon keepers around you.

Below is a (long) video from NerdCubed of what it is like to play. You don't have to see the whole video, just go to 2:20 and watch how fast and easy it is to build a room.

He is building a treasure room to store all his gold and jewels. It's 45 squares big, and it takes the game about 2 minutes to complete. It's fast and it's fun. You feel the energy and the rhythm of the game.

So this is the old days:

  • Full game price + expansion packs included: $5.99
  • Build a large treasure room of 45 squares in just 2 minutes.

Just remember that.

Dungeon Keeper for iPad, 2014

Now let's look at exactly the same game, but reimagined for the iPad and in-app purchases.

And once again, NerdCubed reviews it. The review below is 8 minutes long, but you have to watch it all the way to the end. It perfectly illustrates just how mind-bogglingly bad things have gone.

The modern-day Dungeon Keeper is not even a game. It's just a socially engineered scam. And since people don't remember what real gaming was like in the 90s, they are giving it the highest rating in the app store.

It's just unbelievable.

Two things before you watch it:

  • Remember the original gameplay: the total price was $5.99, and it only takes 2 minutes to dig out 45 squares.
  • NerdCubed does have a rather liberal use of words, and you probably shouldn't listen to it in the office.

But you *must* watch this because most people don't know how bad things are today.

What EA has done here has nothing to do with gaming, and the same is true for pretty much all other 'free-to-play + in-app purchase' games. We don't have a mobile gaming industry anymore. We have a mobile scamming industry.

There is no game here. And you know what the worst part of this is? Let me show you.

This crap is featured as one of the five top picks on the front page of Apple's app store, as an Editors' choice.

How absolutely f*cked up is that? (...sorry for my choice of words, but this is one of those times where profanity is justified).

As NerdCubed said in his review, the problem is that all the future generations of gamers are going to experience this as the default. They are going to grow up in a world in which people actually think this is what gaming is like. That social engineering and scamming people is an acceptable way of doing business.

It's not!

It's a scam... done by sick people who have nothing left in their lives other than selfish greed. They should be thrown in jail for deceptive business tactics, and not featured in the app store as an Editors' Choice.

This is outrageous.

You and I have a responsibility to speak out against it. This is not what we want our future to be like. This is completely unacceptable behavior.

As I started out saying, I wish I could end this article on a positive note and a constructive solution. But there is no solution here. There is no way to fix this other than stop doing it.

So for you mobile game developers I simply ask this of you. Walk into the closest bathroom and look yourself in the mirror. Do you like the person that you see? Is this what you want to be remembered by? Or do you want to be remembered as a game developer like Bullfrog, whom everyone loved because they made real games worth playing?

And if you are one of the very few developers who are still making real games (without the in-app purchase crap), I salute you. You are my heroes.

Games like Oceanhorn, Minecraft, XCOM and others. You guys are freaking brilliant, and I absolutely love you for having the courage to stick to your principles.

Also read: "Optimizing Your Industry to the Point of Suicide", in which I show you how it will cost you almost $3,500 to unlock everything in Asphalt 7.

NSA and GCHQ spoofed LinkedIn to hack Belgian cryptography professor — Tech News and Analysis


Comments:" NSA and GCHQ spoofed LinkedIn to hack Belgian cryptography professor — Tech News and Analysis "

URL:http://gigaom.com/2014/02/01/nsa-and-gchq-hacked-belgian-cryptographer-report/


Belgium’s federal prosecutor is looking into the likely hacking of noted cryptographer Jean-Jacques Quisquater by the NSA and its British counterpart GCHQ, as first reported on Saturday morning by De Standaard.

Quisquater’s targeting became apparent during the investigation into the hacking of telecoms firm Belgacom, shown by Edward Snowden’s leaks to be the work of GCHQ.

As in that case, the Université catholique de Louvain professor apparently fell victim to a “quantum insert” trick that duped him into thinking he was visiting LinkedIn to respond to an emailed “request” when he was actually visiting a malware-laden copy of a LinkedIn page.

“The Belgian federal police (FCCU) sent me a warning about this attack and did the analysis,” Quisquater told me by email. As for the purpose of the hack: “We don’t know. There are many hypotheses (about 12 or 15) but it is certainly an industrial espionage plus a surveillance of people working about civilian cryptography.”

Quisquater, who holds 17 patents and is particularly noted for his work on payment security, also said the attack was “related to a variant” of MiniDuke, an exploit that quietly puts backdoors into the target’s system.

Whatever the precise motive, on the face of it Quisquater is very much a civilian target — a professor emeritus, not a spy, a terrorist nor a member of government. It would be difficult for any intelligence agency to claim that stealing information from him is a matter of crucial national interest. The aftermath of this revelation will be worth watching.

This article was updated at 9am PT to include Quisquater’s quotes and again at 9.50am PT to include comment.

Microsoft please clean your store from junk | One Slash


Comments:"Microsoft please clean your store from junk | One Slash"

URL:http://oneslash.postach.io/microsoft-please-clean-your-store-from-junk


Posted on February 1st, 2014

Dear Microsoft,
I really like your Windows Phone platform, and use Nokia Lumia 820. I am generally happy with performance and functionality. Your Windows Phone store has several exclusive apps with very interesting ideas like Foundbite and Bill Reminder.

BUT, your store has a lot of junk which makes new and unexperienced users to buy and run fake applications.

I reported several times applications which is just fake, disappointing, or PAID apps which is just a links to a web browser.

Let me show you what I will get if I will enter just a Facebook.

I will get tons of Facebook apps, and only one of this apps s official other are junk. Let me show screenshots of one,

The Facebook PRO which costs $2.49, and as you can see its just a browser wrapper.

There is one more problem which makes me really mad at you dear Microsoft, and it is FAKE apps.

THE CHROME browser,

3 Firefox browsers

Please Microsoft, remove fake and useless apps.

P.S. I have reported all these apps several times to Microsoft, but no action has been taken.

The Only Interview Question That Matters | Inc.com


Comments:"The Only Interview Question That Matters | Inc.com"

URL:http://www.inc.com/lou-adler/best-interview-question-ever.html


Last week, LinkedIn announced to the world that I've been in the recruiting industry for 36 years. During that time, I've written a number of books about talent challenges and opportunities, but one thing continues to surprise me: More than 90 percent of hiring managers think they're good interviewers, yet rarely do they reach unanimous hiring decisions with other 90 percenters in the same room evaluating the same candidate.

This realization led me on a quest to find the one interview question that would yield universal agreement from hiring managers. It took 10 years of trial and error, but I eventually found it. Here it is:

What single project or task would you consider your most significant accomplishment in your career to date?

To see why this simple question is so powerful, imagine you're the candidate and I've just asked you this question. What accomplishment would you select?

Then imagine that over the course of the next 15-20 minutes I asked you the following follow-up questions. How would you respond?

  • Can you give me a detailed overview of the accomplishment?
  • Tell me about the company, your title, your position, your role, and the team involved.
  • What were the actual results achieved?
  • When did it take place and how long did the project take?
  • Why were you chosen?
  • What were the 3-4 biggest challenges you faced and how did you deal with them?
  • Where did you go the extra mile or take the initiative?
  • Walk me through the plan, how you managed it, and its measured success.
  • Describe the environment and resources.
  • Explain your manager's style and whether you liked it.
  • What were the technical skills needed to accomplish the objective and how were they used?
  • What were some of the biggest mistakes you made?
  • What aspects of the project did you truly enjoy?
  • What aspects did you not especially care about and how did you handle them?
  • Give examples of how you managed and influenced others.
  • How did you change and grow as a person?
  • What would you do differently if you could do it again?
  • What type of formal recognition did you receive?

With an accomplishment big enough, and answers detailed enough to fill 20 minutes, this one line of questioning can tell an interviewer everything he or she needs to know about a candidate. The insight gained is remarkable. But the real secret ingredient is not the question; that's just a setup. The most important elements are the details underlying the accomplishment. This is what real interviewing is about -- delving into the details.

Don't spend time asking clever interview questions; instead, spend time learning to get the answer to just this one question. Then ask it again and begin to connect the dots. After you hire a few people this way, you'll also call it the most important interview question of all time.

IMAGE: Jared Cherup/Flickr

Last updated: Feb 1, 2014


LOU ADLER is the CEO of The Adler Group, a consulting firm helping companies implement performance-based hiring. His latest book, The Essential Guide for Hiring & Getting Hired (Workbench, 2013), covers the performance-based process described in this article in more depth. Lou is one of LinkedIn’s top 20 Influencers, has appeared on Fox News and is frequently quoted in Business Insider, The Wall Street Journal and recruiting industry trade publications around the world.
@LouA


Why Bloom filters work the way they do | DDI


Comments:" Why Bloom filters work the way they do | DDI"

URL:http://www.michaelnielsen.org/ddi/why-bloom-filters-work-the-way-they-do/


Imagine you’re a programmer who is developing a new web browser. There are many malicious sites on the web, and you want your browser to warn users when they attempt to access dangerous sites. For example, suppose the user attempts to access http://domain/etc. You’d like a way of checking whether domain is known to be a malicious site. What’s a good way of doing this?

An obvious naive way is for your browser to maintain a list or set data structure containing all known malicious domains. A problem with this approach is that it may consume a considerable amount of memory. If you know of a million malicious domains, and domains need (say) an average of 20 bytes to store, then you need 20 megabytes of storage. That’s quite an overhead for a single feature in your web browser. Is there a better way?

In this post I'll describe a data structure which provides an excellent way of solving this kind of problem. The data structure is known as a Bloom filter. Bloom filters are much more memory efficient than the naive "store-everything" approach, while remaining extremely fast. I'll describe both how Bloom filters work, and also some extensions of Bloom filters to solve more general problems.

Most explanations of Bloom filters cut to the chase, quickly explaining the detailed mechanics of how Bloom filters work. Such explanations are informative, but I must admit that they made me uncomfortable when I was first learning about Bloom filters. In particular, I didn’t feel that they helped me understand why Bloom filters are put together the way they are. I couldn’t fathom the mindset that would lead someone to invent such a data structure. And that left me feeling that all I had was a superficial, surface-level understanding of Bloom filters.

In this post I take an unusual approach to explaining Bloom filters. We won’t begin with a full-blown explanation. Instead, I’ll gradually build up to the full data structure in stages. My goal is to tell a plausible story explaining how one could invent Bloom filters from scratch, with each step along the way more or less “obvious”. Of course, hindsight is 20-20, and such a story shouldn’t be taken too literally. Rather, the benefit of developing Bloom filters in this way is that it will deepen our understanding of why Bloom filters work in just the way they do. We’ll explore some alternative directions that plausibly could have been taken – and see why they don’t work as well as Bloom filters ultimately turn out to work. At the end we’ll understand much better why Bloom filters are constructed the way they are.

Of course, this means that if your goal is just to understand the mechanics of Bloom filters, then this post isn’t for you. Instead, I’d suggest looking at a more conventional introduction – the Wikipedia article, for example, perhaps in conjunction with an interactive demo, like the nice one here. But if your goal is to understand why Bloom filters work the way they do, then you may enjoy the post.

A stylistic note: Most of my posts are code-oriented. This post is much more focused on mathematical analysis and algebraic manipulation: the point isn’t code, but rather how one could come to invent a particular data structure. That is, it’s the story behind the code that implements Bloom filters, and as such it requires rather more attention to mathematical detail.

General description of the problem: Let's begin by abstracting away from the "safe web browsing" problem that began this post. We want a data structure which represents a set $S$ of objects. That data structure should enable two operations: (1) the ability to add an extra object $x$ to the set; and (2) a test to determine whether a given object $x$ is a member of $S$. Of course, there are many other operations we might imagine wanting – for example, maybe we'd also like to be able to delete objects from the set. But we're going to start with just these two operations of adding and testing. Later we'll come back and ask whether operations such as deleting objects are also possible.

Idea: store a set of hashed objects: Okay, so how can we solve the problem of representing $S$ in a way that's more memory efficient than just storing all the objects in $S$? One idea is to store hashed versions of the objects in $S$, instead of the full objects. If the hash function is well chosen, then the hashed objects will take up much less memory, but there will be little danger of making errors when testing whether an object is an element of the set or not.

Let's be a little more explicit about how this would work. We have a set of objects $S$, where $|S|$ denotes the number of objects in $S$. For each object $s \in S$ we compute a $k$-bit hash $h(s)$ – i.e., $h$ is a hash function which takes an arbitrary object as input, and outputs $k$ bits – and the set $S$ is represented by the set of hashes $\{h(s) : s \in S\}$. We can test whether $x$ is an element of $S$ by checking whether $h(x)$ is in the set of hashes. This basic hashing approach requires roughly $k|S|$ bits of memory.

(As an aside, in principle it's possible to store the set of hashed objects more efficiently, using just $k|S| - \log_2(|S|!)$ bits, where $\log_2$ is the logarithm to base two. The saving is possible because the ordering of the objects in a set is redundant information, and so in principle can be eliminated using a suitable encoding. However, I haven't thought through what encodings could be used to do this in practice. In any case, the saving is likely to be minimal, since $2^k$ will usually be quite a bit bigger than $|S|$ – if that weren't the case, then hash collisions would occur all the time. So I'll ignore the $\log_2(|S|!)$ terms for the rest of this post. In fact, in general I'll be pretty cavalier in later analyses as well, omitting lower order terms without comment.)

A danger with this hash-based approach is that an object $x$ outside the set might have the same hash value as an object inside the set, i.e., $h(x) = h(s)$ for some $s \in S$. In this case, test will erroneously report that $x$ is in $S$. That is, this data structure will give us a false positive. Fortunately, by choosing a suitable value for $k$, the number of bits output by the hash function, we can reduce the probability of a false positive as much as we want. To understand how this works, notice first that the probability of test giving a false positive is 1 minus the probability of test correctly reporting that $x$ is not in $S$. This occurs when $h(x) \neq h(s)$ for all $s \in S$. If the hash function is well chosen, then the probability that $h(x) \neq h(s)$ is $1 - 1/2^k$ for each $s$, and these are independent events. Thus the probability of test failing is:

$$p = 1 - \left(1 - \frac{1}{2^k}\right)^{|S|}.$$

This expression involves three quantities: the probability $p$ of test giving a false positive, the number of bits $k$ output by the hash function, and the number of elements in the set, $|S|$. It's a nice expression, but it's more enlightening when rewritten in a slightly different form. What we'd really like to understand is how many bits of memory are needed to represent a set of size $|S|$, with probability $p$ of a test failing. To understand that we let $m$ be the number of bits of memory used, and aim to express $m$ as a function of $p$ and $|S|$. Observe that $m = k|S|$, and so we can substitute $k = m/|S|$ to obtain

$$p = 1 - \left(1 - \frac{1}{2^{m/|S|}}\right)^{|S|}.$$

This can be rearranged to express $m$ in terms of $p$ and $|S|$:

$$m = |S| \log_2 \frac{1}{1 - (1 - p)^{1/|S|}}.$$

This expression answers the question we really want answered, telling us how many bits are required to store a set of size $|S|$ with probability $p$ of a test failing. Of course, in practice we'd like $p$ to be small – say $p = 0.01$ – and when this is the case the expression may be approximated by a more transparent expression:

$$m \approx |S| \log_2 \frac{|S|}{p}.$$

This makes intuitive sense: test failure occurs when $x$ is not in $S$, but $h(x)$ is in the hashed version of $S$. Because this happens with probability $p$, it must be that the hashed version of $S$ occupies a fraction $p$ of the total space of possible hash outputs. And so the size of the space of all possible hash outputs must be about $|S|/p$. As a consequence we need about $\log_2(|S|/p)$ bits to represent each hashed object, in agreement with the expression above.

How memory efficient is this hash-based approach to representing $S$? It's obviously likely to be quite a bit better than storing full representations of the objects in $S$. But we'll see later that Bloom filters can be far more memory efficient still.
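To get a feel for the numbers, we can plug in figures matching the safe-browsing example that opened the post (the specific choices $|S| = 10^6$ and $p = 0.01$ are mine, purely for illustration):

$$m \approx |S| \log_2 \frac{|S|}{p} = 10^6 \log_2\left(10^8\right) \approx 2.7 \times 10^7 \text{ bits} \approx 3.3 \text{ MB},$$

compared with roughly 20 megabytes for storing the million domains directly – a large saving, at the price of a one percent false positive rate.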

The big drawback of this hash-based approach is the false positives. Still, for many applications it’s fine to have a small probability of a false positive. For example, false positives turn out to be okay for the safe web browsing problem. You might worry that false positives would cause some safe sites to erroneously be reported as unsafe, but the browser can avoid this by maintaining a (small!) list of safe sites which are false positives for test.

Idea: use a bit array: Suppose we want to represent some subset $S$ of the integers $0, 1, \ldots, 999$. As an alternative to hashing or to storing $S$ directly, we could represent $S$ using an array of $1{,}000$ bits, numbered $0$ through $999$. We would set bits in the array to $1$ if the corresponding number is in $S$, and otherwise set them to $0$. It's obviously trivial to add objects to $S$, and to test whether a particular object is in $S$ or not.

The memory cost to store $S$ in this bit-array approach is $1{,}000$ bits, regardless of how big or small $|S|$ is. Suppose, for comparison, that we stored $S$ directly as a list of 32-bit integers. Then the cost would be $32|S|$ bits. When $|S|$ is very small, this approach would be more memory efficient than using a bit array. But as $|S|$ gets larger, storing $S$ directly becomes much less memory efficient. We could ameliorate this somewhat by storing elements of $S$ using only 10 bits, instead of 32 bits. But even if we did this, it would still be more expensive to store the list once $S$ got beyond one hundred elements. So a bit array really would be better for modestly large subsets.

Idea: use a bit array where the indices are given by hashes: A problem with the bit array example described above is that we needed a way of numbering the possible elements of $S$, namely $0, 1, \ldots, 999$. In general the elements of $S$ may be complicated objects, not numbers in a small, well-defined range.

Fortunately, we can use hashing to number the elements of $S$. Suppose $h$ is a $k$-bit hash function. We're going to represent a set $S$ using a bit array containing $2^k$ elements. In particular, for each $s \in S$ we set the $h(s)$th element in the bit array, where we regard $h(s)$ as a number in the range $0, 1, \ldots, 2^k - 1$. More explicitly, we can add an element $s$ to the set by setting bit number $h(s)$ in the bit array. And we can test whether $x$ is an element of $S$ by checking whether bit number $h(x)$ in the bit array is set.

This is a good scheme, but the test can fail to give the correct result, which occurs whenever $x$ is not an element of $S$, yet $h(x) = h(s)$ for some $s \in S$. This is exactly the same failure condition as for the basic hashing scheme we described earlier. By exactly the same reasoning as used then, the failure probability is

$$p = 1 - \left(1 - \frac{1}{2^k}\right)^{|S|}. \quad [*]$$

As we did earlier, we'd like to re-express this in terms of the number of bits of memory used, $m$. This works differently than for the basic hashing scheme, since the number of bits of memory consumed by the current approach is $m = 2^k$, as compared to $m = k|S|$ for the earlier scheme. Using $m = 2^k$ and substituting for $k$ in Equation [*], we have:

$$p = 1 - \left(1 - \frac{1}{m}\right)^{|S|}.$$

Rearranging this to express $m$ in terms of $p$ and $|S|$ we obtain:

$$m = \frac{1}{1 - (1 - p)^{1/|S|}}.$$

When $p$ is small this can be approximated by

$$m \approx \frac{|S|}{p}.$$

This isn't very memory efficient! We'd like the probability of failure to be small, and that makes the $1/p$ dependence bad news when compared to the $\log_2(1/p)$ dependence of the basic hashing scheme described earlier. The only time the current approach is better is when $|S|$ is very, very large. To get some idea for just how large, if we want $p = 0.01$, then $|S|/p$ is only better than $|S|\log_2(|S|/p)$ when $|S|$ gets to be more than about $10^{28}$. That's quite a set! In practice, the basic hashing scheme will be much more memory efficient.

Intuitively, it's not hard to see why this approach is so memory inefficient compared to the basic hashing scheme. The problem is that with a $k$-bit hash function, the basic hashing scheme used $k|S|$ bits of memory, while hashing into a bit array uses $2^k$ bits, but doesn't change the probability of failure. That's exponentially more memory!
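To make the gap concrete, here is the comparison under the same illustrative choices as before ($|S| = 10^6$, $p = 0.01$, both mine):

$$m_{\text{basic}} \approx |S| \log_2 \frac{|S|}{p} \approx 2.7 \times 10^7 \text{ bits}, \qquad m_{\text{filter}} \approx \frac{|S|}{p} = 10^8 \text{ bits},$$

so the single filter needs roughly four times as much memory here, and the gap widens rapidly as $p$ shrinks.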

At this point, hashing into bit arrays looks like a bad idea. But it turns out that by tweaking the idea just a little we can improve it a lot. To carry out this tweaking, it helps to name the data structure we’ve just described (where we hash into a bit array). We’ll call it a filter, anticipating the fact that it’s a precursor to the Bloom filter. I don’t know whether “filter” is a standard name, but in any case it’ll be a useful working name.

Idea: use multiple filters: How can we make the basic filter just described more memory efficient? One possibility is to try using multiple filters, based on independent hash functions. More precisely, the idea is to use $r$ filters, each based on an (independent) $k$-bit hash function, $h_1, h_2, \ldots, h_r$. So our data structure will consist of $r$ separate bit arrays, each containing $2^k$ bits, for a grand total of $r 2^k$ bits. We can add an element $s$ by setting the $h_1(s)$th bit in the first bit array (i.e., the first filter), the $h_2(s)$th bit in the second filter, and so on. We can test whether a candidate element $x$ is in the set by simply checking whether all the appropriate bits are set in each filter. For this to fail, each individual filter must fail. Because the hash functions are independent of one another, the probability of this is the $r$th power of the probability of any single filter failing:

$$p = \left[1 - \left(1 - \frac{1}{2^k}\right)^{|S|}\right]^r.$$

The number of bits of memory used by this data structure is $m = r 2^k$, and so we can substitute $2^k = m/r$ and rearrange to get

$$m = \frac{r}{1 - \left(1 - p^{1/r}\right)^{1/|S|}}. \quad [**]$$

Provided $p^{1/r}$ is much smaller than $1$, this expression can be simplified to give

$$m \approx \frac{r|S|}{p^{1/r}}.$$

Good news! This repetition strategy is much more memory efficient than a single filter, at least for small values of $r$. For instance, moving from $r = 1$ repetition to $r = 2$ repetitions changes the denominator from $p$ to $p^{1/2}$ – typically, a huge improvement, since $p$ is very small. And the only price paid is doubling the numerator. So this is a big win.
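Continuing with the same illustrative numbers as above ($|S| = 10^6$, $p = 0.01$, my choices):

$$r = 1: \; m \approx \frac{|S|}{p} = 10^8 \text{ bits}, \qquad r = 2: \; m \approx \frac{2|S|}{p^{1/2}} = 2 \times 10^7 \text{ bits},$$

a five-fold saving just from using two filters instead of one.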

Intuitively, and in retrospect, this result is not so surprising. Putting multiple filters in a row, the probability of error drops exponentially with the number of filters. By contrast, in the single filter scheme, the drop in the probability of error is roughly linear with the number of bits. (This follows from considering Equation [*] in the limit where $p$ is small.) So using multiple filters is a good strategy.

Of course, a caveat to the last paragraph is that this analysis requires that $p^{1/r}$ be small, which means that $r$ can't be too large before the analysis breaks down. For larger values of $r$ the analysis is somewhat more complicated. In order to find the optimal value of $r$ we'd need to figure out what value of $r$ minimizes the exact expression [**] for $m$. We won't bother – at best it'd be tedious, and, as we'll see shortly, there is in any case a better approach.

Overlapping filters: This is a variation on the idea of repeating filters. Instead of having $r$ separate bit arrays, we use just a single array of $2^k$ bits. When adding an object $s$, we simply set all the bits $h_1(s), h_2(s), \ldots, h_r(s)$ in the same bit array. To test whether an element $x$ is in the set, we simply check whether all the bits $h_1(x), h_2(x), \ldots, h_r(x)$ are set or not.

What's the probability of the test failing? Suppose $x \not\in S$. Failure occurs when $h_1(x) = h_j(s)$ for some $j$ and $s \in S$, and also $h_2(x) = h_j(s)$ for some $j$ and $s \in S$, and so on for all the remaining hash functions, $h_3, \ldots, h_r$. These are independent events, and so the probability they all occur is just the product of the probabilities of the individual events. A little thought should convince you that each individual event will have the same probability, and so we can just focus on computing the probability that $h_1(x) = h_j(s)$ for some $j$ and $s \in S$. The overall probability of failure will then be the $r$th power of that probability, i.e.,

$$p = \Pr\left(h_1(x) = h_j(s) \text{ for some } j, s\right)^r.$$

The probability that $h_1(x) = h_j(s)$ for some $j$ and $s$ is one minus the probability that $h_1(x) \neq h_j(s)$ for all $j$ and $s$. These are independent events for the different possible values of $j$ and $s$, each with probability $1 - 1/2^k$, and so

$$\Pr\left(h_1(x) = h_j(s) \text{ for some } j, s\right) = 1 - \left(1 - \frac{1}{2^k}\right)^{r|S|},$$

since there are $r|S|$ different pairs of possible values $(j, s)$. It follows that

$$p = \left[1 - \left(1 - \frac{1}{2^k}\right)^{r|S|}\right]^r.$$

Substituting $m = 2^k$ we obtain

$$p = \left[1 - \left(1 - \frac{1}{m}\right)^{r|S|}\right]^r,$$

which can be rearranged to obtain

$$m = \frac{1}{1 - \left(1 - p^{1/r}\right)^{1/(r|S|)}}.$$

This is remarkably similar to the expression [**] derived above for repeating filters. In fact, provided $p^{1/r}$ is much smaller than $1$, we get

$$m \approx \frac{r|S|}{p^{1/r}},$$

which is exactly the same as [**] when $p^{1/r}$ is small. So this approach gives quite similar outcomes to the repeating filter strategy.

Which approach is better, repeating or overlapping filters? In fact, it can be shown that

$$\frac{1}{1 - \left(1 - p^{1/r}\right)^{1/(r|S|)}} \leq \frac{r}{1 - \left(1 - p^{1/r}\right)^{1/|S|}},$$

and so the overlapping filter strategy is more memory efficient than the repeating filter strategy. I won't prove the inequality here – it's a straightforward (albeit tedious) exercise in calculus. The important takeaway is that overlapping filters are the more memory-efficient approach.

How do overlapping filters compare to our first approach, the basic hashing strategy? I'll defer a full answer until later, but we can get some insight by choosing, say, $r = 4$ and $p = 10^{-4}$. Then for the overlapping filter we get $m \approx 40|S|$, while the basic hashing strategy gives $m \approx |S| \log_2(|S|/p)$. Basic hashing is worse whenever $|S|$ is more than about 100 million – a big number, but also a big improvement over the astronomical set size required before a single filter beats basic hashing. Given that we haven't yet made any attempt to optimize $r$, this ought to encourage us that we're onto something.

Problems for the author

  • I suspect that there’s a simple intuitive argument that would let us see upfront that overlapping filters will be more memory efficient than repeating filters. Can I find such an argument?

Bloom filters: We're finally ready for Bloom filters. In fact, Bloom filters involve only a few small changes to overlapping filters. In describing overlapping filters we hashed into a bit array containing $2^k$ bits. We could, instead, have used hash functions with a range $0, 1, \ldots, m-1$ and hashed into a bit array of $m$ (instead of $2^k$) bits. The analysis goes through unchanged if we do this, and we end up with

$$p = \left[1 - \left(1 - \frac{1}{m}\right)^{r|S|}\right]^r$$

and

$$m = \frac{1}{1 - \left(1 - p^{1/r}\right)^{1/(r|S|)}},$$

exactly as before. The only reason I didn't do this earlier is because in deriving Equation [*] above it was convenient to re-use the reasoning from the basic hashing scheme, where $k$ (not $m$) was the convenient parameter to use. But the exact same reasoning works.

What's the best value of $r$ to choose? Put another way, what value of $r$ should we choose in order to minimize the number of bits, $m$, given a particular value for the probability of error, $p$, and a particular set size $|S|$? Equivalently, what value of $r$ will minimize $p$, given $m$ and $|S|$? I won't go through the full analysis here, but with calculus and some algebra you can show that choosing

$$r = \frac{m \ln 2}{|S|}$$

minimizes the probability $p$. (Note that $\ln$ denotes the natural logarithm, not the logarithm to base 2.) By choosing $r$ in this way we get:

$$m = \frac{|S| \ln(1/p)}{(\ln 2)^2}. \quad [***]$$

This really is good news! Not only is it better than a bit array, it's actually (usually) much better than the basic hashing scheme we began with. In particular, it will be better whenever

$$\frac{|S| \ln(1/p)}{(\ln 2)^2} < |S| \log_2 \frac{|S|}{p},$$

which is equivalent to requiring

$$|S| > \left(\frac{1}{p}\right)^{\frac{1}{\ln 2} - 1} \approx \left(\frac{1}{p}\right)^{0.44}.$$

If we want (say) $p = 0.01$ this means that the Bloom filter will be better whenever $|S|$ is bigger than about $8$, which is obviously an extremely modest set size.
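Since the full optimization is skipped above, here is a brief sketch of where the optimal $r$ comes from; it uses the standard approximation $(1 - 1/m)^{r|S|} \approx e^{-r|S|/m}$ rather than the exact expressions, so treat it as a sketch rather than a rigorous derivation. The probability that a given bit is still unset after all $|S|$ elements have been added is about $e^{-r|S|/m}$, so

$$p \approx \left(1 - e^{-r|S|/m}\right)^r.$$

Writing $t = r|S|/m$ gives $\ln p \approx \frac{m}{|S|} \, t \ln\left(1 - e^{-t}\right)$, which is minimized at $e^{-t} = 1/2$, i.e. at $t = \ln 2$. Substituting back:

$$r = \frac{m \ln 2}{|S|}, \qquad p \approx 2^{-m \ln 2 / |S|}, \qquad m \approx \frac{|S| \ln(1/p)}{(\ln 2)^2},$$

which is where both the optimal $r$ and Equation [***] come from.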

Another way of interpreting [***] is that a Bloom filter requires $\ln(1/p) / (\ln 2)^2 \approx 1.44 \log_2(1/p)$ bits per element of the set being represented. In fact, it's possible to prove that any data structure supporting the add and test operations will require at least $\log_2(1/p)$ bits per element in the set. This means that Bloom filters are near-optimal. Further work has been done finding even more memory-efficient data structures that actually meet the bound. See, for example, the paper by Anna Pagh, Rasmus Pagh, and S. Srinivasa Rao.

Problems for the author

  • Are the more memory-efficient algorithms practical? Should we be using them?

In actual applications of Bloom filters, we won't know $S$ in advance, nor $|S|$. So the way we usually specify a Bloom filter is to specify the maximum size $n$ of set that we'd like to be able to represent, and the maximal probability of error, $p$, that we're willing to tolerate. Then we choose

$$r = \log_2(1/p)$$

and

$$m = \frac{n \ln(1/p)}{(\ln 2)^2}.$$

This gives us a Bloom filter capable of representing any set up to size $n$, with probability of error guaranteed to be at most $p$. The size $n$ is called the capacity of the Bloom filter. Actually, these expressions are slight simplifications, since the terms on the right may not be integers – to be a little more pedantic, we choose

$$r = \left\lceil \log_2(1/p) \right\rceil$$

and

$$m = \left\lceil \frac{n \ln(1/p)}{(\ln 2)^2} \right\rceil.$$

One thing that still bugs me about Bloom filters is the expression for the optimal value for $r$. I don't have a good intuition for it – why is it logarithmic in $1/p$, and why does it not depend on $|S|$? There's a tradeoff going on here that's quite strange when you think about it: bit arrays on their own aren't very good, but if you repeat or overlap them just the right number of times, then performance improves a lot. And so you can think of Bloom filters as a kind of compromise between an overlap strategy and a bit array strategy. But it's really not at all obvious (a) why choosing a compromise strategy is the best; or (b) why the right point at which to compromise is where it is, i.e., why $r$ has the form it does. I can't quite answer these questions at this point – I can't see that far through Bloom filters. I suspect that understanding a simple case (say $r = 2$) really well would help, but I haven't put in the work. Anyone with more insight is welcome to speak up!

Summing up Bloom filters: Let's collect everything together. Suppose we want a Bloom filter with capacity $n$, i.e., capable of representing any set $S$ containing up to $n$ elements, and such that test produces a false positive with probability at most $p$. Then we choose

$$r = \left\lceil \log_2(1/p) \right\rceil$$

independent hash functions, $h_1, h_2, \ldots, h_r$. Each hash function has a range $0, 1, \ldots, m-1$, where $m$ is the number of bits of memory our Bloom filter requires,

$$m = \left\lceil \frac{n \ln(1/p)}{(\ln 2)^2} \right\rceil.$$

We number the bits in our Bloom filter from $0, 1, \ldots, m-1$. To add an element $s$ to our set we set the bits $h_1(s), h_2(s), \ldots, h_r(s)$ in the filter. And to test whether a given element $x$ is in the set we simply check whether bits $h_1(x), h_2(x), \ldots, h_r(x)$ in the bit array are all set.
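As a worked example (the capacity $n = 10^6$ and error rate $p = 0.01$ are my own illustrative choices, echoing the opening example):

$$r = \left\lceil \log_2 100 \right\rceil = 7, \qquad m = \left\lceil \frac{10^6 \ln 100}{(\ln 2)^2} \right\rceil \approx 9.6 \times 10^6 \text{ bits} \approx 1.2 \text{ MB},$$

i.e. about 9.6 bits per element – not far above the $\log_2(1/p) \approx 6.6$-bit lower bound mentioned above, and a large improvement on the roughly 20 MB needed to store a million domains directly.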

That’s all there is to the mechanics of how Bloom filters work! I won’t give any sample code – I usually provide code samples in Python, but the Python standard library lacks bit arrays, so nearly all of the code would be concerned with defining a bit array class. That didn’t seem like it’d be terribly illuminating. Of course, it’s not difficult to find libraries implementing Bloom filters. For example, Jason Davies has written a javascript Bloom filter which has a fun and informative online interactive visualisation. I recommend checking it out. And I’ve personally used Mike Axiak‘s fast C-based Python library pybloomfiltermmap– the documentation is clear, it took just a few minutes to get up and running, and I’ve generally had no problems.


Applications of Bloom filters: Bloom filters have been used to solve many different problems. Here’s just a few examples to give the flavour of how they can be used. An early idea was Manber and Wu’s 1994 proposal to use Bloom filters to store lists of weak passwords. Google’s BigTable storage system uses Bloom filters to speed up queries, by avoiding disk accesses for rows or columns that don’t exist. Google Chrome uses Bloom filters to do safe web browsing– the opening example in this post was quite real! More generally, it’s useful to consider using Bloom filters whenever a large collection of objects needs to be stored. They’re not appropriate for all purposes, but at the least it’s worth thinking about whether or not a Bloom filter can be applied.

Extensions of Bloom filters: There’s many clever ways of extending Bloom filters. I’ll briefly describe one, just to give you the flavour, and provide links to several more.

A delete operation: It’s possible to modify Bloom filters so they support a delete operation that lets you remove an element from the set. You can’t do this with a standard Bloom filter: it would require unsetting one or more of the bits in the bit array. This could easily lead us to accidentally delete other elements in the set as well.

Instead, the delete operation is implemented using an idea known as a counting Bloom filter. The basic idea is to take a standard Bloom filter, and replace each bit in the bit array by a bucket containing several bits (usually 3 or 4 bits). We're going to treat those buckets as counters, initially set to $0$. We add an element $s$ to the counting Bloom filter by incrementing each of the buckets numbered $h_1(s), h_2(s), \ldots, h_r(s)$. We test whether $x$ is in the counting Bloom filter by looking to see whether each of the corresponding buckets $h_1(x), h_2(x), \ldots, h_r(x)$ is non-zero. And we delete $s$ by decrementing each of the buckets $h_1(s), h_2(s), \ldots, h_r(s)$.

This strategy avoids the accidental deletion problem, because when two elements of the set $s$ and $s'$ hash into the same bucket, the count in that bucket will be at least $2$. Deleting one of the elements, say $s$, will still leave the count for the bucket at least $1$, so $s'$ won't be accidentally deleted. Of course, you could worry that this will lead us to erroneously conclude that $s$ is still in the set after it's been deleted. But that can only happen if other elements in the set hash into every single bucket that $s$ hashes into. That will only happen if $|S|$ is very large.
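To make the mechanics concrete, here is a toy counting Bloom filter in Erlang (my own illustration, not from the original post). It uses erlang:phash2({I, Term}, M) as a stand-in for r independent hash functions and keeps the counters in a map; a real implementation would use a fixed-size array of small counters and deal with bucket overflow.

-module(counting_bloom).
-export([new/2, add/2, member/2, delete/2]).

%% A filter with M buckets and R derived hash functions.
new(M, R) -> #{m => M, r => R, buckets => #{}}.

%% The R bucket indices for a term, derived from one underlying hash.
indices(Term, #{m := M, r := R}) ->
    [erlang:phash2({I, Term}, M) || I <- lists:seq(1, R)].

%% Increment every bucket the term hashes to.
add(Term, F = #{buckets := B}) ->
    B1 = lists:foldl(fun(Ix, Acc) -> Acc#{Ix => maps:get(Ix, Acc, 0) + 1} end,
                     B, indices(Term, F)),
    F#{buckets := B1}.

%% A term can be present only if all of its buckets are non-zero.
member(Term, F = #{buckets := B}) ->
    lists:all(fun(Ix) -> maps:get(Ix, B, 0) > 0 end, indices(Term, F)).

%% Decrement every bucket the term hashes to (never below zero).
delete(Term, F = #{buckets := B}) ->
    B1 = lists:foldl(fun(Ix, Acc) -> Acc#{Ix => max(0, maps:get(Ix, Acc, 0) - 1)} end,
                     B, indices(Term, F)),
    F#{buckets := B1}.

With this sketch, adding "example.com" twice and deleting it once still leaves member/2 returning true, which is exactly the behaviour a plain bit array cannot provide.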

Of course, that's just the basic idea behind counting Bloom filters. A full analysis requires us to understand issues such as bucket overflow (when a counter gets incremented too many times), the optimal size for buckets, the probability of errors, and so on. I won't get into that, but there are details in the further reading, below.

Other variations and further reading: There are many more variations on Bloom filters. Just to give you the flavour of a few applications: (1) they can be modified to be used as lookup dictionaries, associating a value with each element added to the filter; (2) they can be modified so that the capacity scales up dynamically; and (3) they can be used to quickly approximate the number of elements in a set. There are many more variations as well: Bloom filters have turned out to be a very generative idea! This is part of why it’s useful to understand them deeply, since even if a standard Bloom filter can’t solve the particular problem you’re considering, it may be possible to come up with a variation which does. You can get some idea of the scope of the known variations by looking at the Wikipedia article. I also like the survey article by Andrei Broder and Michael Mitzenmacher. It’s a little more dated (2004) than the Wikipedia article, but nicely written and a good introduction. For a shorter introduction to some variations, there’s also a recent blog post by Matthias Vallentin. You can get the flavour of current research by looking at some of the papers citing Bloom filters here. Finally, you may enjoy reading the original paper on Bloom filters, as well as the original paper on counting Bloom filters.

Understanding data structures: I wrote this post because I recently realized that I didn’t understand any complex data structure in any sort of depth. There are, of course, a huge number of striking data structures in computer science – just look at Wikipedia’s amazing list! And while I’m familiar with many of the simpler data structures, I’m ignorant of most complex data structures. There’s nothing wrong with that – unless one is a specialist in data structures there’s no need to master a long laundry list. But what bothered me is that I hadn’t thoroughly mastered even a single complex data structure. In some sense, I didn’t know what it means to understand a complex data structure, at least beyond surface mechanics. By trying to reinvent Bloom filters, I’ve found that I’ve deepened my own understanding and, I hope, written something of interest to others.

Interested in more? Please subscribe to this blog, or follow me on Twitter. You may also enjoy reading my new book about open science, Reinventing Discovery.



China’s Deceptively Weak (and Dangerous) Military | The Diplomat


Comments:"China’s Deceptively Weak (and Dangerous) Military | The Diplomat"

URL:http://thediplomat.com/2014/01/chinas-deceptively-weak-and-dangerous-military/


In many ways, the PLA is weaker than it looks – and more dangerous.

By Ian Easton for The Diplomat

January 31, 2014

In April 2003, the Chinese Navy decided to put a large group of its best submarine talent on the same boat as part of an experiment to synergize its naval elite. The result? Within hours of leaving port, the Type 035 Ming III class submarine sank with all hands lost. Never having fully recovered from this maritime disaster, the People’s Republic of China (PRC) is still the only permanent member of the United Nations Security Council never to have conducted an operational patrol with a nuclear missile submarine.

China is also the only member of the UN’s “Big Five” never to have built and operated an aircraft carrier. While it launched a refurbished Ukrainian built carrier amidst much fanfare in September 2012 – then-President Hu Jintao and all the top brass showed up– soon afterward the big ship had to return to the docks for extensive overhauls because of suspected engine failure; not the most auspicious of starts for China’s fledgling “blue water” navy, and not the least example of a modernizing military that has yet to master last century’s technology.

Indeed, today the People’s Liberation Army (PLA) still conducts long-distance maneuver training at speeds measured by how fast the next available cargo train can transport its tanks and guns forward. And if mobilizing and moving armies around on railway tracks sounds a bit antiquated in an era of global airlift, it should – that was how it was done in the First World War.

Not to be outdone by the conventional army, China’s powerful strategic rocket troops, the Second Artillery Force, still uses cavalry units to patrol its sprawling missile bases deep within China’s vast interior. Why? Because it doesn’t have any helicopters. Equally scarce in China are modern fixed-wing military aircraft. So the Air Force continues to use a 1950s Soviet designed airframe, the Tupolev Tu-16, as a bomber (its original intended mission), a battlefield reconnaissance aircraft, an electronic warfare aircraft, a target spotting aircraft, and an aerial refueling tanker. Likewise, the PLA uses the Soviet designed Antonov An-12 military cargo aircraft for ELINT (electronic intelligence) missions, ASW (anti-submarine warfare) missions, geological survey missions, and airborne early warning missions. It also has an An-12 variant specially modified for transporting livestock, allowing sheep and goats access to remote seasonal pastures.

But if China’s lack of decent hardware is somewhat surprising given all the hype surrounding Beijing’s massive military modernization program, the state of “software” (military training and readiness) is truly astounding. At one military exercise in the summer of 2012, a strategic PLA unit, stressed out by the hard work of handling warheads in an underground bunker complex, actually had to take time out of a 15-day wartime simulation for movie nights and karaoke parties. In fact, by day nine of the exercise, a “cultural performance troupe” (common PLA euphemism for song-and-dance girls) had to be brought into the otherwise sealed facility to entertain the homesick soldiers.

Apparently becoming suspicious that men might not have the emotional fortitude to hack it in high-pressure situations, an experimental all-female unit was then brought in for the 2013 iteration of the war games, held in May, for an abbreviated 72-hour trial run. Unfortunately for the PLA, the results were even worse. By the end of the second day of the exercise, the hardened tunnel facility’s psychological counseling office was overrun with patients, many reportedly too upset to eat and one even suffering with severe nausea because of the unpleasant conditions.

While recent years have witnessed a tremendous Chinese propaganda effort aimed at convincing the world that the PRC is a serious military player that is owed respect, outsiders often forget that China does not even have a professional military. The PLA, unlike the armed forces of the United States, Japan, South Korea, Taiwan and other regional heavyweights, is by definition not a professional fighting force. Rather, it is a “party army,” the armed wing of the Chinese Communist Party (CCP). Indeed, all career officers in the PLA are members of the CCP and all units at the company level and above have political officers assigned to enforce party control. Likewise, all important decisions in the PLA are made by Communist Party committees that are dominated by political officers, not by operators. This system ensures that the interests of the party’s civilian and military leaders are merged, and for this reason new Chinese soldiers entering into the PLA swear their allegiance to the CCP, not to the PRC constitution or the people of China.

This may be one reason why China’s marines (or “naval infantry” in PLA parlance) and other  amphibious warfare units train by landing on big white sandy beaches that look nothing like the west coast of Taiwan (or for that matter anyplace else they could conceivably be sent in the East China Sea or South China Sea). It could also be why PLA Air Force pilots still typically get less than ten hours of flight time a month (well below regional standards), and only in 2012 began to have the ability to submit their own flight plans (previously, overbearing staff officers assigned pilots their flight plans and would not even allow them to taxi and take-off on the runways by themselves).

Intense and realistic training is dangerous business, and the American maxim that the more you bleed during training the less you bleed during combat doesn’t translate well in a Leninist military system. Just the opposite. China’s military is intentionally organized to bureaucratically enforce risk-averse behavior, because an army that spends too much time training is an army that is not engaging in enough political indoctrination. Beijing’s worst nightmare is that the PLA could one day forget that its number one mission is protecting the Communist Party’s civilian leaders against all its enemies – especially when the CCP’s “enemies” are domestic student or religious groups campaigning for democratic rights, as happened in 1989 and 1999, respectively.

absorptions: Mystery signal from a helicopter

$
0
0

Comments:"absorptions: Mystery signal from a helicopter"

URL:http://www.windytan.com/2014/02/mystery-signal-from-helicopter.html


Last night, YouTube suggested a video for me. It was a raw clip from a news helicopter filming a police chase in Kansas City, Missouri. I quickly noticed a weird interference in the audio, especially the left channel, and thought it must be caused by the chopper's engine. I turned up the volume and realized it's not interference at all, but a mysterious digital signal! And off we went again.

(Here, an HTML5 audio element used to be.)

The signal sits alone on the left audio channel, so I can completely isolate it. Judging from the spectrogram, the modulation scheme seems to be BFSK, switching the carrier between 1200 and 2200 Hz. I demodulated it by filtering it with a lowpass and highpass sinc in SoX and comparing outputs. Now I had a bitstream at 1200 bps.
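For anyone who wants to reproduce this without SoX, the same idea can be sketched by measuring how much energy each bit period carries near the two tones and picking the stronger one. The snippet below is a minimal, unofficial illustration in Python; the filename, the 16-bit PCM assumption, and the fixed bit timing are my assumptions, and a real decoder would add proper clock recovery.

import wave
import numpy as np

def goertzel_power(block, fs, freq):
    """Power of a single tone in a block of samples (Goertzel algorithm)."""
    n = len(block)
    k = int(round(n * freq / fs))
    w = 2.0 * np.pi * k / n
    coeff = 2.0 * np.cos(w)
    s_prev = s_prev2 = 0.0
    for x in block:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def demodulate(path, baud=1200, f_mark=1200, f_space=2200):
    """Crude BFSK slicer: assumes 16-bit PCM and perfect bit timing (no clock recovery)."""
    wf = wave.open(path, "rb")
    fs = wf.getframerate()
    samples = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)
    if wf.getnchannels() == 2:
        samples = samples[0::2]          # the signal sits alone on the left channel
    samples = samples.astype(float)
    step = fs / baud                     # samples per bit, about 36.75 at 44.1 kHz
    bits, pos = [], 0.0
    while pos + step <= len(samples):
        block = samples[int(pos):int(pos + step)]
        # Bell 202 convention: 1200 Hz = mark = 1, 2200 Hz = space = 0
        bits.append(1 if goertzel_power(block, fs, f_mark) >
                         goertzel_power(block, fs, f_space) else 0)
        pos += step
    return bits

# bits = demodulate("helicopter_left.wav")   # hypothetical extracted clip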

The bitstream consists of packets of 47 bytes each, synchronized by start and stop bits and separated by repetitions of the byte 0x80. Most bits stay constant during the video, but three distinct groups of bytes contain varying data, marked blue in the original post's figure.
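Here is a sketch of how those bits might be regrouped into bytes and packets, assuming conventional asynchronous framing (a 0 start bit, eight data bits sent LSB-first, a 1 stop bit); that framing and the helper names are my guesses, consistent with the Bell 202 hint further down, not something stated in the capture itself.

def bits_to_bytes(bits):
    """Deframe an async serial stream: 0 start bit, 8 data bits (LSB first), 1 stop bit.
    This 8N1-style framing is an assumption based on the Bell 202 hint."""
    out, i = [], 0
    while i + 10 <= len(bits):
        if bits[i] == 0 and bits[i + 9] == 1:                 # plausible start/stop pair
            out.append(sum(b << n for n, b in enumerate(bits[i + 1:i + 9])))
            i += 10
        else:
            i += 1                                            # slide until framing locks
    return out

def split_packets(stream, idle=0x80, size=47):
    """Group the byte stream into 47-byte packets separated by runs of the idle byte."""
    packets, current = [], []
    for b in stream:
        if b == idle and not current:
            continue                                          # still between packets
        current.append(b)
        if len(current) == size:
            packets.append(bytes(current))
            current = []
    return packets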

What could it be? Location telemetry from the helicopter? Information about the camera direction? Video timestamps?

The first guess seems to be correct. It is supported by the relationship of two of the three byte groups. If the first 4 bits of each byte are ignored, the data forms a smooth gradient of three-digit numbers in base-10. When plotted parametrically, they form an intriguing winding curve. It is very similar to this plot of the car's position (blue, yellow) along with viewing angles from the helicopter (green), derived from the video by magical image analysis (only the first few minutes shown):

When the received curve is overlaid with the car's location trace, we see that 100 steps on the curve scale correspond to exactly 1 minute of arc on the map!
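In code, the decoding step described above might look roughly like this; the exact byte layout and the reference point are inferred from the post, so treat these helpers as hypothetical.

def group_to_number(group):
    """Read a varying byte group as a base-10 number, keeping only the low nibble of
    each byte (the 'ignore the first 4 bits' observation)."""
    return sum((b & 0x0F) * 10 ** (len(group) - 1 - i) for i, b in enumerate(group))

def steps_to_arc_degrees(steps):
    """100 steps equal 1 minute of arc, so one step is 0.01' = 1/6000 of a degree.
    This gives only a relative offset; the absolute position still has to be
    anchored by matching the trace against the map."""
    return steps / 100.0 / 60.0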

Using this relative information, and the fact that the helicopter circled around the police station in the end, we can plot all the received data points in Google Earth to see the location trace of the helicopter:

Update: Apparently the video downlink to ground was transmitted using a transmitter similar to Nucomm Skymaster TX that is able to send live GPS coordinates. And this is how they seem to do it.

Update 2: Yes, it's Bell 202 ASCII.

Why Dart should learn JSON while it’s still young | Max Horstmann's Coding Blog

$
0
0

Comments:"Why Dart should learn JSON while it’s still young | Max Horstmann's Coding Blog"

URL:http://maxhorstmann.net/2014/01/31/why-dart-should-learn-json-while-its-still-young/


Remember how easy it was to learn your native language as a toddler? Of course you don’t, and that’s the point. Grammar and vocabulary somehow got hard-wired in your brain while you were very young, and pretty much all you had to do was listen and sleep. As we all know, it’s much harder to learn another language later in life. 

One of Google’s youngest babies, the Dart programming language, must have been conceived some time in 2011 and was finally born in November 2013. So, it’s fair to say that it’s still in its infancy.

I like that Dart allows for using the same language and framework for browser development (potentially an SPA powered by AngularDart) and for back-end code. Yes, just like node.js. But without having to deal with Javascript.

Here’s one suggestion I wanna make to Google’s Dart team, though: teach your baby Dart to speak JSON, the web’s data-interchange format, while it’s still young.

“Wait a minute”, you might say, “what the hell are you talking about? Dart does already speak JSON. There’s dart:convert, which allows you to decode and encode all day long.”

Yes. But for a language designed as a new platform for scalable web app engineering, JSON shouldn’t come as an afterthought, hidden somewhere in a library. JSON is everywhere on the web, and Dart should speak to you and listen to you in JSON like it’s its native language. The Dart editor should be able to display a JSON serialization of anything the mouse pointer touches. Dart should breathe JSON. Let me give you an example:

A few weeks ago, I found it surprisingly hard to serialize a simple (plain-old) Dart object like

class Customer
{
 int Id;
 String Name;
}

to this (very common) JSON representation

{
 "Id": 17,
 "Name": "John"
}

and ended up posting this question on Stackoverflow. It took me a bit by surprise when Google’s Developer Advocate for Dart, Seth Ladd, replied that

Unfortunately, there’s no universal JSON serialization of objects for all platforms.

Well, that’s true. JSON has been designed as a data interchange format, which doesn’t cover platform- and language-specific serialization aspects. In fact, JSON is very lightweight by design. Side note: I recommend reading RESTful Web APIs to learn more about emerging standards sitting on top of JSON, such as Collection+JSON.
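That limitation isn't unique to Dart. As a point of comparison (my example, not part of the original post), Python's standard json module also refuses to serialize an arbitrary object until you tell it how:

import json

class Customer:
    def __init__(self, id, name):
        self.Id = id
        self.Name = name

customer = Customer(17, "John")

# json.dumps(customer) raises a TypeError: the encoder has no rule for this object.
# You opt in explicitly, for example by handing it the attribute dictionary:
print(json.dumps(customer, default=lambda o: o.__dict__))
# -> {"Id": 17, "Name": "John"}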

However, as said: the web speaks JSON. Almost every RESTful API speaks JSON (well, a few still speak XML). The Twitter, Facebook, and Stack Exchange APIs speak JSON. Therefore, I think the simple serialization format above – even if not part of any formal standard – should become the default serialization of a Dart object.

So, just typing

var json = JSON.encode(customer);

should give me the simple JSON representation of customer by default. As pointed out in the Stackoverflow question, I should not be required to explicitly implement toJson() or use a third-party library such as Alexander Tkachev’s exportable package (although I have to admit it’s pretty neat).

There’s one more thing: DateTime. Unfortunately, ECMA-404 doesn’t standardize how to serialize dates. ISO 8601, however, does. Dart’s DateTime class already “complies with a subset of ISO 8601” via its parse method.

So, here’s another thing I wanna add to my suggestion: consider adopting ISO 8601 as the default JSON serialization of DateTime, so the following just works:

var dt = DateTime.parse("2014-01-26T11:38:17");
print(JSON.encode(dt));

As of Dart 1.1.1, this will throw:

Unhandled exception:
Converting object to an encodable object failed.
#0 _JsonStringifier.stringifyValue (dart:convert/json.dart:416)
#1 _JsonStringifier.stringify (dart:convert/json.dart:336)
#2 JsonEncoder.convert (dart:convert/json.dart:177)
#3 JsonCodec.encode (dart:convert/json.dart:106)

Discuss on HN.


Voice Chat API — free audio conference web service powered by WebRTC.

Strangest language feature - Stack Overflow

$
0
0

Comments:"Strangest language feature - Stack Overflow"

URL:http://stackoverflow.com/questions/1995113/strangest-language-feature/2002154#2002154


In JavaScript, void is not a keyword, it is not a type declaration, nor is it a variable name, and it is also not a function, nor is it an object. void is a prefix operator, similar to -, --, ++, and !. You can prefix it to any expression, and that expression will evaluate to undefined.

It is frequently used in bookmarklets, and inline event handlers, as in this somewhat frequent example:

<a href="javascript:void(0)">do nothing</a>

The way it's used in that example makes it look like a function invocation, when really it's just an overly clever way of getting the primitive undefined value. Most people don't really understand the true nature of void in JavaScript, and that can lead to a lot of nasty bugs and weird unexpected things happening.

Unfortunately, I think the void operator is the only truly guaranteed way to get the undefined value in JavaScript, since undefined, as pointed out in another answer, is a variable name that can be reassigned, and {}.a can be messed up by Object.prototype.a = 'foo'.

Update: I thought of another way to generate undefined:

(function(){}())

Eh, a bit verbose though, and it's even less clear that returning "undefined" is its purpose.

Dataset: Ten Years of NFL Plays Analyzed, Visualized, Quizzified (Downloadable) - Statwing Blog

$
0
0

Comments:"Dataset: Ten Years of NFL Plays Analyzed, Visualized, Quizzified (Downloadable) - Statwing Blog"

URL:http://blog.statwing.com/dataset-ten-years-of-nfl-plays-analyzed-visualized-quizzified-downloadable/


Statwing is an easy-to-use data analysis tool, available for individual use or embedded into other products.

It’s third-and-3 and you desperately need a first down. What do you do, run or pass?

We’ve structured ten years of NFL play-by-play data (raw data courtesy of Advanced NFL Stats), then uploaded it into Statwing for analysis. Now you can test your coaching instincts against the data.

Early in the game, the score is tied. You have fourth-and-goal at the 2-yard line. What should you do?

  • Go for it
  • Kick a field goal
  • If you do the math, it works out similarly either way

Wrong. You should go for it.

When teams go for it on fourth-and-goal from the 2, they get a touchdown 45% of the time. So on average teams get 3.1 points when they go for it—roughly the same amount they’d expect if they kicked a field goal, since only 2% of field goals are missed at that range.
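The arithmetic is easy to check. Here's a quick sketch using the figures quoted above (a 45% touchdown rate and 2% missed field goals), treating the extra point as automatic, which is a simplification:

def expected_points(success_rate, points_if_successful):
    return success_rate * points_if_successful

go_for_it  = expected_points(0.45, 7)   # ~3.15 points per attempt (TD plus extra point)
field_goal = expected_points(0.98, 3)   # ~2.94 points, since only 2% of kicks miss from there

print(f"go for it: {go_for_it:.2f}  field goal: {field_goal:.2f}")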


But that’s not all. If you’re stopped on fourth-and-goal, the opponent starts with terrible field position. You’ll even get a safety about 5% of the time. By comparison, you can expect the opponent to start from the 23-yard line after a kickoff following a made field goal, and they will even have a 0.5% chance of returning the kickoff for a touchdown.


Going for it and kicking a field goal both yield about 3 points on average, and the field position is much better if you go for it. Despite this, coaches usually kick a field goal on fourth-and-goal. During the last ten seasons coaches went for the touchdown about 20% of the time in this situation.


It’s third-and-1. Which type of run is most likely to result in a first down?

  • Run it up the gut (between the linemen)
  • Run off tackle (outside the linemen)

Wrong. Side story: your author’s mom always yelled at the Chiefs not to go up the middle on third-and-short. It saddens your author to find out that she was most likely leading the Chiefs astray.

Going up the gut just barely beats running around the end.

(The original post links to an even more detailed breakdown of running plays, e.g., off-center versus off-guard.)


On third-and-3, are you more likely to pick up a first down by running the ball or passing it?

  • Running
  • Passing
  • Running and passing are roughly equally likely to work

Sort of. Good enough.

Running was not statistically significantly more likely to yield a first down, but it did trend slightly above passing (51% vs 49%), so we’ll give it to you.

Wrong.

Running actually trends toward being more effective than passing, at 51% vs 49% (though the difference isn’t statistically significant, so the best answer is “equally likely to work”).

Correct.

Running was not statistically significantly more likely to yield a first down, though it did trend slightly above passing (51% vs 49%).
In case you’re curious, here are the odds of picking up a first down on third-and-x, split by running versus passing:

Runs are statistically significantly more effective with 1 yard to go, and passes are more effective with 4+ yards to go.

As an aside, coaches tend to pass on third with more than a yard to go.

Coaches very rarely run on third down with three or more yards to go.


You need a two-point conversion. What kind of play should you call?

  • Running
  • Passing
  • Running and passing are equally likely to work.

During the last ten years running has succeeded 62% of the time, versus 46% for passing.

This seems odd because we just found out that running was only microscopically better than passing on third-and-2. But a two-point conversion is different from a typical third-and-2; the defense isn’t spread out, so it’s hard for receivers to find gaps in the coverage.


This suggests that coaches should run more often than they currently do.



Would you like to see pretty data visualizations about punting?

  • Yes
  • No

Correct. You would, it seems.

Sorry, that’s incorrect; you would love to see pretty data visualizations about punting. Surprised you didn’t know that.

Punts have gotten roughly half a yard longer per season over the ten-year period.


But does that mean punters are increasingly outpunting their coverage? After all, longer punts beget longer returns.

Like the other binned scatterplot, this visualization was made automatically in Statwing with 3 clicks.

Nope! We looked into it, and while returns have gotten longer, they only got longer at about .15 yards per season, and a pretty similar number of punts aren’t returned at all.


Thanks for playing

You got out of answers correct. When you try this quiz with a sorry quiz-taker like you, that’s the result you’re going to get. [That's a joke, we think you did fine :)] You’re the best quiz-taker in the game.


Discussion on Hacker News

A special thanks to David Laughlin, who wrote most of the copy for this post. David is available for freelance work at davidclaughlin@gmail.com.


See notes below to download the data.

Update: burntsushi from Hacker News created some tools that make it easy to query this kind of data. We haven’t looked at them in depth but they look much more efficient than the datasets we link to below.

Notes

The original data from Advanced NFL Stats is mostly free-text play descriptions, which we interpreted into structured data using Excel. The original data does have a few errors here and there, not all of which could be cleaned up. Some plays are missing, and some plays have inaccurate data, maybe 0.5%.

We’re very confident that you’ll have a much easier time exploring the data in Statwing than in Excel or another tool. So we encourage you to try analyzing it in Statwing first. To save your analyses, use this link, not the ones linked to in the images above.

But, if you don’t believe us or you want to modify the data, you can download the raw CSV of our version of the data.

In 2003, 2004, and 2005, the data doesn’t discriminate between a QB scramble and a run. So for many analyses (like many of the above), you’ll want to filter out those years.

We make the following assumptions throughout:

  • You are coaching the “average” team. Individual teams would vary, but the data we use is the average of all teams’ behaviors in whatever situation we are analyzing.
  • There are more than five minutes left in the half or game.
  • Unless otherwise noted, you’re between your own 10-yard line and your opponent’s 20-yard line.
  • Yardage from penalties committed during a play is included in the outcome.
  • The hypothetical coach does not always call the same type of play in the same situation. That is, the coach randomizes play calling enough to be unpredictable while still favoring the more advantageous plays. The very awesome Brian Burke at Advanced NFL Stats does a great job of describing randomization and game theory.

In the last ten years, there have only been a few instances where teams went for it on fourth-and-goal at the 2-yard line. It turns out, though, that going for the end zone on third-and-short is pretty similarly successful to going for it on fourth-and-short, so we used both types of plays in this analysis. For example, the 45% figure was calculated using both third- and fourth-and-2. Here, as with the rest of this fourth-and-2 analysis, we were inspired by a 2002 paper by David Romer. Romer is a notable Berkeley economist, and his wife chaired the White House Council of Economic Advisors in 2009 and 2010.

If objectives other than just picking up a first down are considered, there is evidence that running is better. Brian at Advanced NFL Stats does a great job of diving into that question, though you might want to learn about the concept of expected points before reading Brian’s analysis.


Patent troll CEO explains why company wants names of EFF donors | Ars Technica

$
0
0

Comments:"Patent troll CEO explains why company wants names of EFF donors | Ars Technica"

URL:http://arstechnica.com/tech-policy/2014/01/podcasting-patent-trolls-ceo-explains-why-it-wants-eff-donor-names/


Carl Malamud's "Geek of the Week" Internet radio show, which dates to 1993, is one piece of "prior art" that EFF says should knock out the Personal Audio podcasting patent.

The patent-holding company that wants all podcasters to pay up is just looking for a fair shake.

The CEO and general counsel of Personal Audio LLC got on the phone with Ars Technica to explain why the company is asking for the identities of more than 1,300 donors who have chipped in to help the Electronic Frontier Foundation fight its podcasting patent. The subpoena seeking donor identities and a wide array of other information connected to EFF's fight against the patent was revealed by EFF in a Wednesday blog post. EFF has moved to quash the subpoena in court, saying that while some donors are very public about their support, they also have a First Amendment right to contribute anonymously.

The fundraiser in question was kicked off by EFF to pay for what's called an "inter partes review" at the US Patent and Trademark Office. EFF sought to raise $30,000, but Personal Audio's attempt to make patent demands against podcasters struck a nerve: to date, about $80,000 has been raised from more than 1,300 donors.

Personal Audio CEO and general counsel Brad Liddle explained this morning that the company is just trying to make sure its opponents don't get two bites at the apple while the fight over the patent goes forth. With the IPR petition moving forward at the patent office and litigation proceeding in Texas federal courts, Personal Audio apparently suspects that the same people are behind both.  

"EFF insinuates the information we are seeking is not relevant to the Texas litigation," said Liddle in a brief interview with Ars. "But to the extent that other third parties have donated or assisted to the PTO proceeding—to the extent they've been working on the inter partes review—they should be bound by the result." 

Much of the prior art that has been presented to the patent office has also been brought up in the Texas cases, said Liddle. He believes that if the Texas defendants are involved in the patent office proceeding, they shouldn't be allowed to present their same defenses all over again in federal court.

"If there's a corporation or a person that has assisted EFF in the PTO proceeding, there's an estoppel argument" that should stop them from using the same defenses again, he said. Personal Audio shouldn't have to "engage in duplicative validity challenges, in expensive litigation."

The defendants in the Texas lawsuits include the Discovery Channel-owned HowStuffWorks podcast, NBC, CBS, and Fox, as well as Ace Broadcasting (which produces Adam Carolla's podcast), and a smaller Internet radio company called TogiNet.

The inclusion of Lindale, Texas-based TogiNet appears to be a play to keep the larger defendants in Personal Audio's chosen venue, the Eastern District of Texas, which continues to be a popular venue for patent plaintiffs.

"If they want to find out whether the defendants in Texas donated, they can ask the defendants," pointed out EFF's Nazer—a point made in the group's motion to quash. "They don't have a reason to invade the privacy of more than 1,000 donors."

While the legal wrangling continues, old Internet shows dredged up by the EFF petition have gone a long way toward setting the historical record straight. Episodes of "Internet radio" shows date back to at least 1993, years before Personal Audio founder Jim Logan filed patents connected to his failed "news-on-cassettes" business.

Given that there's no question Internet broadcasting pre-dated Logan's business, Ars asked if Liddle and his colleagues at Personal Audio felt that it was justifiable to keep pursuing small podcasters for royalty payments. "I'm not going to comment on that," he said.

Personal Audio's response to the EFF patent office petition is due next week, and the office is expected to decide whether or not to review the patents by early May. If it does institute a review, that process could take a year or more.

In addition to the Texas lawsuits, Personal Audio has sent out demand letters that have been recorded on EFF's "Trolling Effects" site. It's unclear how many letters it has sent.

Adjuncts in American universities: U.S. News should penalize colleges for using contingent faculty.

$
0
0

Comments:"Adjuncts in American universities: U.S. News should penalize colleges for using contingent faculty."

URL:http://www.slate.com/articles/life/education/2014/01/adjuncts_in_american_universities_u_s_news_should_penalize_colleges_for.html


Courtesy U.S. News Best Colleges

Changes are afoot among us part-time adjuncts who shoulder a hefty majority of college instruction in the United States. We have, for now, the attention of Congress. We’ve got our own snappy hashtags! And we’re methodically organizing ourselves into unions, in my town and yours. Administrations are noticing, and are none too pleased. Sometimes, they go to impressive lengths to prevent a vote. Other times, they just issue veiled threats, saying they’re “concerned” about faculty “ceding their individual right[s]” to the Service Employees International Union, an “outside organization” unfamiliar, “in all frankness,” with “the enterprise of higher education.”

Nice adjunct job you got there—it would be a shame if you didn’t exercise your right to self-determination, and something happened to it. But that’s just it: Adjunct jobs aren’t nice, and many of us feel, in all frankness, that we have little to lose. But sympathy for the adjunct’s plight is limited. (Read any comments section, ever, on any article with the word “adjunct” in it.) We chose, after all, to devote our lives to something so stupid and useless. Supply and demand. Find another job. Bootstraps. I get it.

But here’s what they don’t get: It’s not that adjuncts deserve better. It’s that students deserve better than adjuncts. And the people who decide which colleges are the “best” should be telling you this, but they’re not. That’s why I’m calling on U.S. News, the leading college ranking service in the country, to track the percentage of classes taught by adjuncts in their rankings—and penalize schools that use too many.

Here’s the cold, hard truth every prospective student, and every parent, should know: In the vast majority of subjects, when you have an adjunct professor instead of a full-timer, you are getting a substandard education. To say this, I am admitting that I myself provide subpar service to my students. But I do.

I’m not subpar on purpose—I, like most adjuncts, just don’t have the resources to treat students well. Like, you know, my own office, where I can meet with students when they’re free, instead of the tiny weekly window of time when I get the desk and computer (which runs Windows XP) to myself. I am on campus five hours a week, because when I’m not in the classroom, I have nowhere else to go. If my students need further explanation, they can talk to me in class, or they can wait for whatever terse, harried lines I email them back (if I do; with all the jobs I juggle, sometimes I forget). I teach the same freshman survey over and over again, so I rarely have a student more than once, and thus never build a mentoring relationship with anyone. I am, by virtue of the parameters of my position, not giving students anything remotely near their money’s worth. And hundreds of thousands of adjuncts in the United States are just like me. Most of those adjuncts would be giving their students a much better education, were they only provided the support that a college gives its full-time faculty. But they aren’t, and the effect on student learning is—surprise—deleterious.

As much as I support efforts to mobilize and unionize, we also need a different tactic. Today’s students view themselves as customers, and college as an excruciatingly expensive service. Most humanists balk at this crass, Randian characterization, but not me. I cleave onto it wholeheartedly, because it is in the revelation of the poor “service” adjuncts provide that we might finally hit universities where it hurts: their rankings.

Destroy the standing of any institution that does not have a sizeable majority of faculty that are full-time.

Institutions, no matter what they say, are mortally invested in their placement on the U.S. News Best Colleges list. But although the freely available rankings share data about endowment, SAT scores, class size, and student-to-faculty ratio, they do not list percentage of part-time faculty. That does not mean the ranking metric doesn’t include this data, explained Robert J. Morse, U.S. News’ director of research data. Morse assured me U.S. News is actually “far ahead of the game” on holding institutions that overuse part-time faculty accountable, because in their ranking factor, “schools get more credit for a larger proportion of full-time faculty.” He explained that schools that use a large portion of part-time faculty “score lower” on the ranking metric, but wouldn’t specify how much lower. Is overuse of part-time faculty as bad as a meager endowment? Worse than lackluster SAT scores? It really should be. (If you’d like to do the math yourself, here’s the U.S. News formula for 2014; proportion of full-time faculty makes up 5 percent of the “faculty resources” indicator, which itself makes up 20 percent of the ranking model, or roughly 1 percent of the overall score.)*

To be truly ahead of the game, the “percentage of faculty who are full-time” should be front and center on the rankings list, before even student-to-faculty ratio. Instead, it’s tucked away inside the paid version of U.S. News’ ranking website, so most “education consumers” will never see it—even though it should be the first thing you ask when you and your kid are touring a campus. Whether or not some sports nut who graduated in 1952 gives bank to the football team should matter much, much less than whether or not your professor has slept in a heated house, and thus prepared your lesson effectively.

U.S. News and its ilk must enjoy the power they have over these hapless institutions—so they ought to wield that power for good. Don’t just factor in the use (and overuse) of part-time faculty, but all contingent faculty. Destroy the standing of any institution that does not have a sizeable majority of faculty that are full-time, preferably tenure-track or tenured. Or, at the very least, list the institution’s percentages front and center, with an explanation of why this factor is so crucial in choosing a college. Because this information is not forthcoming on most university websites—and you can see why, when organizations such as the Ohio Part-Time Faculty Association report that my former employer, the “public Ivy” Ohio State, has 65 percent contingent faculty. And yet it’s still ranked on U.S. News as the No. 52 national university in the country. Why? With a percentage like that, it shouldn’t even be in the double digits, no matter how much wealthy former Buckeyes donate.

If a college or university’s ranking—and concurrently, as others are calling for, even its accreditation—could be openly and seriously damaged by the overuse of contingent faculty, then and only then would students and parents actually begin to care, and they’d vote with their tuition. And then and only then would administrations actually begin to … well, “care” isn’t the right word. Let’s say they’d finally find something about contingent faculty to be concerned about, other than the union.

*Correction: This article misstated that U.S. News’ ranking metric is private; the magazine published its 2014 formula here. (Return.)

User Onboarding | A frequently-updated compendium of web app first-run experiences

Facebook Turns 10: The Mark Zuckerberg Interview - Businessweek

$
0
0

Comments:"Facebook Turns 10: The Mark Zuckerberg Interview - Businessweek"

URL:http://www.businessweek.com/articles/2014-01-30/facebook-turns-10-the-mark-zuckerberg-interview#p3


Photograph by Graeme Mitchell for Bloomberg Businessweek. Behind this week’s cover.

Mark Zuckerberg doesn’t usually observe sentimental anniversaries. This year he’s confronted by three of them. On Feb. 4, Facebook (FB), the company he co-founded in a Harvard University dorm, turns 10 years old. The prodigy himself turns 30 in May. It’s also been a decade since his first date with Priscilla Chan, now his wife, whom he first met in line for the bathroom at a Harvard fraternity party.

So last fall, Zuckerberg began typing up dozens of pages of musings, often pecking out the words on his phone. He shaped his thoughts into 3-, 5-, and 10-year plans. He also gave himself a specific goal for 2014. He’s fond of annual challenges, and in previous years he’s vowed to learn Mandarin (2010), to eat only animals he slaughtered himself (2011), and to meet someone new each day (2013). For this year he intends to write at least one well-considered thank-you note every day, via e-mail or handwritten letter.

“It’s important for me, because I’m a really critical person,” he says at Facebook’s sprawling corporate campus in Menlo Park, Calif. “I always kind of see how I want things to be better, and I’m generally not happy with how things are, or the level of service that we’re providing for people, or the quality of the teams that we built. But if you look at this objectively, we’re doing so well on so many of these things. I think it’s important to have gratitude for that.” He’s still unnaturally boyish and is wearing his customary uniform: hoodie, gray T-shirt, and jeans. No Adidas shower sandals, though; in what could be construed as a sign of creeping maturity, he’s wearing black Nike sneakers.

Zuckerberg, Facebook’s chairman and chief executive officer, has many reasons to be grateful. His social network is used by 1.23 billion people around the world. The company is worth around $135 billion and will probably become the fastest in history to reach $150 billion. Its recent financial results have impressed Wall Street, in part with the success of its shift to mobile phones. In the fourth-quarter earnings report it filed on Jan. 29, Facebook disclosed that for the first time sales from ads on mobile phones and tablets exceeded revenue from traditional PCs. The shift to mobile was “not as quick as it should have been,” Zuckerberg says, but “one of the things that characterizes our company is that we are pretty strong-willed.”

Photograph by Ryan Bradley. Zuckerberg and co-founder Dustin Moskovitz in Palo Alto, 2004.

Facebook’s challenge is to keep growing. With almost half the world’s Internet-connected population using the service, the company is facing the immutable law of large numbers and simply can’t keep adding users at its previously torrid rate. At the same time, Facebook must defend its highly profitable business against several threatening trends. Internet users—particularly young ones—crave different kinds of online experiences and new ways of connecting with one another. Many lead online lives that begin and end without Facebook. Rivals such as Twitter (TWTR) and Snapchat, with their embrace of pseudonyms and different ways of sharing publicly and privately, have grown up outside the once-inexorable Facebook ecosystem. Silicon Valley’s smartest product developers, who used to make games and other diversions that lived on Facebook, are instead applying their talents to creating apps that compete with it. “No one individually has quite yet displaced Facebook,” says Keith Rabois, a partner at Silicon Valley venture capital firm Khosla Ventures. “But as more and more people choose another social platform as their primary hub, it’s a real problem. They could be losing one segment at a time.”

Zuckerberg says companies often lose their way during major transitions. His company hasn’t, he says, so “we’re really at this point where we can take a step back and think about the next big things that we want to do.”

 
 
Early in 2012, Zuckerberg called an all-hands meeting and dramatically declared that the company would be “mobile first.” He then reinforced that focus by unceremoniously ending any meeting where employees began their presentations talking about computers rather than smartphones. And his three-year plan remains all about strengthening Facebook’s presence in mobile. “Mark had to learn how to run a mobile-first company in the last two years, which meant thinking differently about how he ran teams, how products were built, and which engineering skills we needed,” says Sheryl Sandberg, Facebook’s chief operating officer. “He made that shift so quickly.”

The process hasn’t been entirely smooth. Facebook contemplated building its own smartphones and decided against it. Last year it introduced software called Facebook Home to customize devices running Google’s (GOOG) Android software, which flopped. Now it’s concentrating on a third approach: standalone apps, lots of them. On Jan. 30, Facebook plans to release the first in a series of mobile apps as part of an initiative it’s calling Facebook Creative Labs.

Many of these apps will have their own brands and distinct styles of sharing. The first, called Paper, looks nothing at all like a Facebook product. If Facebook is the Internet’s social newspaper, Paper strives to be its magazine: photos, friend updates, and shared articles show up in an image-heavy, uncluttered way. The stories are picked and ordered based largely on how much they are shared and “liked” on Facebook, with a team of human editors ensuring that the content comes from the right sources. The app includes a few neat interface tricks such as a panoramic mode, which lets users navigate to different sections of a photograph by tilting their device in different directions. “We just think that there are all these different ways that people want to share, and that compressing them all into a single blue app is not the right format of the future,” Zuckerberg says. In other words, the future of Facebook may not rest entirely on Facebook itself.

The company’s first significant move toward becoming a diversified app power was buying Instagram. Facebook snapped up the photo-sharing app for $1 billion in April 2012, and the marriage appears to be a happy one. According to a recent study by Pew Research Center, 57 percent of Instagram users visit the service on a daily basis. It’s the second-highest engagement rate of any social network, after Facebook.

Last year, Facebook bid $3 billion to buy Snapchat, the trendy social networking app where photographs vanish after a few seconds. But Snapchat co-founder Evan Spiegel, a 23-year-old Stanford University dropout, seems to view Facebook as Zuckerberg himself once regarded Google, and as Google’s founders once saw Microsoft (MSFT): as an establishment power to be combatted and occasionally mocked. Spiegel rejected the entreaty and posted screen shots of e-mail conversations with Zuckerberg on Twitter.

Asked about having his private messages made public, Zuckerberg seems pensive, not upset. “Oh, I don’t know, that’s probably not what I would have done,” he says, and then suggests that Spiegel’s move was a forgivable error in judgment. “Whenever I speak to entrepreneurs, they always ask me what mistakes [they] should try not to make. I actually think that the thing is, you’re just going to mess up all this stuff, and we have [as well].”

Although Snapchat doesn’t reveal how many users it has, some reports suggest that it, not Facebook, is the social network to beat among teenagers; Snapchat already handles more photos every day than Facebook. IStrategy Labs, a social media consulting firm, recently reported that Facebook’s teenage user base has fallen 25 percent since 2011. Facebook executives, including Zuckerberg, question the accuracy of such reports and note that a majority of teens still use Facebook every day, at a rate unmatched by any rival. That’s not to say Zuckerberg and his team are dismissive. “Generally, when a product is very successful, we spent a lot of time talking about why,” says Bret Taylor, Facebook’s former chief technology officer, who left the company in 2012 to form his own startup creating word processing tools for mobile phones. “Mark is very willing to recognize the strengths in other products and the flaws in Facebook.”

Working out those flaws and improving Facebook got harder the more successful the social network became. When a fifth or so of the human species uses your product, changing it is no small matter. Last spring the company unveiled a revamped News Feed, the stream of status updates, news articles, and photos that make up the social network’s central artery of information. Although the changes made their way into the mobile app first, Facebook never fully rolled them out onto desktop computers, because users who tested it disliked it. Over the years the company has also introduced such features as a question-and-answer service, a “check-in” tool that allowed users to broadcast physical locations to their friends, and a digital currency called Facebook Credits. They’ve all been stuffed into the primary social network and then largely ignored by Facebook’s members.

Facebook has had success recently with one homegrown standalone app: Facebook Messenger, which was updated last year and vastly increased the use of the social network’s chat service. It’s now the 12th-most-downloaded free app on Apple’s App Store, ahead of the main Facebook app itself. Facebook has had messaging capability for a while, but it was just another feature buried in a great big social network. It used to be “behind two taps, every single time you want to use it,” says Chris Cox, Facebook’s vice president for product and one of Zuckerberg’s longtime confidants. “[That] was a huge, huge amount of friction to add.”
 
 
In December the company gathered its engineers for a brainstorming and coding session to kick off Facebook Creative Labs. Most such “hackathons” last a day; this one went on for three, and participants were told to prepare for it a month and a half in advance. Mike Vernal, a vice president for engineering, calls it the most energetic hackathon he’s seen at the company. Zuckerberg says about 40 ideas emerged from the event. While he won’t share them, he says as many as half a dozen could be introduced this year under the Creative Labs umbrella and suggests one could be tailored for Facebook Groups, an often overlooked feature of the social network that allows clusters of members to communicate privately.

One thing about some of the new apps that will come as a shock to anyone familiar with Facebook: Users will be able to log in anonymously. That’s a big change for Zuckerberg, who once told David Kirkpatrick, author of The Facebook Effect, that “having two identities for yourself is an example of a lack of integrity.”

At the time of Facebook’s founding, there was no such thing as real identity online. Facebook became the first place where people met one another as themselves, and the company was stubborn about asking users to sign in and share material with their own names. A Facebook account became a sort of passport to the rest of the Web, and with its success came new problems. No teenager wants to share insane party pics with a group of friends that may include his or her parents and teachers. And dissidents in parts of the world where speaking freely can be incriminating avoided the service in favor of alternatives such as Twitter, where real names are optional.

Former Facebook employees say identity and anonymity have always been topics of heated debate in the company. Now Zuckerberg seems eager to relax his old orthodoxies. “I don’t know if the balance has swung too far, but I definitely think we’re at the point where we don’t need to keep on only doing real identity things,” he says. “If you’re always under the pressure of real identity, I think that is somewhat of a burden.” Paper will still require a Facebook login, but Zuckerberg says the new apps might be like Instagram, which doesn’t require users to log in with Facebook credentials or share pictures with friends on the social network. “It’s definitely, I think, a little bit more balanced now 10 years later,” he says. “I think that’s good.”

Facebook executives seem eager to manage expectations around these apps, including Paper, saying they’re tailored for smaller audiences and won’t achieve blockbuster, billion-user success anytime soon. This caution may stem from flops such as Facebook Home and the revamped News Feed, which were both heavily promoted, as well as Poke, a Snapchat-like app that Facebook introduced a few years ago and went nowhere. Zuckerberg says Poke was “more of a joke. A few people built it as a hackathon thing, and we made one release and then just kind of abandoned it and haven’t touched it since.” Facebook also doesn’t need to replicate its massive success with these new apps. It’s one of the most profitable companies in the world: In the most recent quarter, its net income was $780 million and operating margins were 56 percent, excluding certain accounting items. The company is sitting on $11.45 billion in cash. You can afford to do a lot of experimentation and make a lot of mistakes with that kind of money.
 
 
Over five years, Zuckerberg wants Facebook to become more intuitive and to solve problems that in some cases users don’t even know they have. Between 5 percent and 10 percent of posts on Facebook involve users posing questions to their friends, such as requests for the names for a good local dentist, or the best Indian restaurant. The company, he says, should do better at harvesting all that data to provide answers. It’s going to be an enormous challenge. Zuckerberg is steering his company right into the domain of Google, which reliably answers most questions online and is one of the few companies with the pockets and will to outspend anyone trying to push the technological boundaries of search. For example, Google recently outmaneuvered Facebook in acquiring DeepMind Technologies, a British artificial intelligence company working on ways to understand and answer complex queries.

Last year, Facebook introduced a Google-esque tool called Graph Search. It’s been a disappointment. When it’s suggested that Graph Search works about half the time, Zuckerberg says that’s being generous. Vernal, the engineering vice president, says Graph Search was the last major product designed primarily for desktop computers. It’s now being redesigned for phones. He cites the opportunity to use a user’s location to deliver results relevant to where they are. If a user is traveling in New Zealand, for example, Facebook should serve up previous updates and insights from Facebook members who have visited Auckland. Vernal says harvesting all this data, amid some trillion status updates posted throughout Facebook’s history, is “a multiyear journey.”

Zuckerberg also has some aggressive personal goals. He’s accelerating his philanthropy and is far beyond where tech titans such as Steve Jobs and Bill Gates were at a similar age. His net worth exceeds $24 billion, ranking him the 26th-wealthiest person in the world, according to the Bloomberg Billionaires Index. He is also the youngest of the top 150. Zuckerberg and Chan recently donated $1 billion to the Silicon Valley Community Foundation, a local organization that gives grants to nonprofits in education, health, and the environment. In January they separately pledged $5 million to a family health center in the disadvantaged Silicon Valley community of East Palo Alto. As for a family of his own, Zuckerberg says his wife is ready and he is not. “I just want to make sure when I have kids, I can spend time with them,” he says. “That’s the whole point.”

When he’s asked to share an overarching conclusion from his period of contemplation over the holidays, Zuckerberg gets serious. “I’m just really lucky,” he says. “I really feel this deep responsibility, and I try to help folks here feel how unique of a position we’re in, and that we need to do the best that we can.”

Zuckerberg grows passionate talking about his 10-year plan. He doesn’t see Facebook building infrastructure computing services, as Amazon.com (AMZN) has with its cloud initiative, or operating systems and wearable computers, as Google or Apple (AAPL) have. His mission is to expand access to the Internet for the billions of people who have yet to visit the Web. Facebook formed a group called Internet.org last summer with six other technology companies, including Samsung Electronics, Qualcomm (QCOM), and Ericsson (ERIC), to simplify their services so they can be delivered more economically over primitive wireless networks and tapped into using cheaper phones. Early tests are promising, Zuckerberg says. More users in undeveloped countries will subscribe to mobile services for the opportunity to use Facebook, which in turn makes it more economical for mobile operators to improve their wireless networks to support higher-bandwidth services such as online education and banking.

The vision is admirable but risky. Facebook could help to bring entire countries online, only to watch their populations flock to a local social network, as users have in China and South Korea. Facebook’s board members have asked Zuckerberg whether there’s money to be made in this initiative, and he admits the answer is largely hypothetical. “If we can help develop some of these economies, then they will turn into markets that our current business can work in,” he says. Sandberg adds that this won’t happen soon. “We’re never going to charge for the product, and there’s really no ad market” in these low-income countries, she says. “Mark is unapologetic about his idealism. He always said Facebook was started not just to be a company, but to fulfill a vision of connecting the world.”

Chris Hates Writing • Am I dreaming?

$
0
0

Comments:"Chris Hates Writing • Am I dreaming?"

URL:http://chrishateswriting.com/post/75158854851/am-i-dreaming


Am I dreaming?

After reading Mark Zuckerberg’s recent comments about the company’s stance on identity, I am almost shaking with excitement. I feel like I’ve waited years for this day, and frankly thought it may never come.

Two months ago I said a huge opportunity awaited those who dared to go where Facebook and Google wouldn’t. Who’d a thunk that company would be… Facebook? That the standard bearer for what most people think of as “online identity” would reconsider its own position is both shocking and encouraging.

As someone who has advocated for anonymity and ephemerality for the past ten plus years, I am extremely excited. I see a lot of my own convictions represented in Evan Spiegel’s recent keynote at the AXS Partner Summit, and hope Snapchat continues to succeed at bringing these ideas into the mainstream. All we need now is for Google to wake up and actually innovate with Plus, and we’ll truly have a party!

The time for reimagining modern identity has finally come.
