
Partial Application in JavaScript using bind()


URL:http://passy.svbtle.com/partial-application-in-javascript-using-bind


There is a pattern in JavaScript that I consider highly underused, because it leads to more concise code that is both easier to write and read. You probably all know of Function.prototype.bind. It’s used most of the time to get rid of all these var that = this or var self = this assignments you used to see everywhere. A common example would be:

this.setup = function () {
  this.on('event', this.handleEvent.bind(this));
};

The first argument passed to bind will serve as this within the scope of the function it returns. A lesser-known property of bind is that it accepts more than one parameter. Every parameter passed to bind after the first will be prepended to the list of parameters when the bound function is invoked.

That means we can create partially applied functions like this:

var add = function (a, b) {
  return a + b;
};
var add2 = add.bind(null, 2);
add2(10) === 12;

Exciting, eh? It becomes more obvious how this can be helpful when we extend the initial event-handling example. Another common pattern when handling events is that you want to provide some context when calling the handler:

this.setup = function () {
  this.on('tweet', function (e, data) {
    this.handleStreamEvent('tweet', e, data);
  }.bind(this));
  this.on('retweet', function (e, data) {
    this.handleStreamEvent('retweet', e, data);
  }.bind(this));
};

If the event handlers for tweet and retweet share much of their logic, it’s a good idea to structure your code like this. The downside, however, is obvious. You have a lot of boilerplate code. In both cases, you need to create an anonymous function, call the event handler in there, pass on the parameters and remember to bind the function so the this context is properly set up.

Can’t we make this simpler? Indeed we can!

this.setup = function () {
  this.on('tweet', this.handleStreamEvent.bind(this, 'tweet'));
  this.on('retweet', this.handleStreamEvent.bind(this, 'retweet'));
};

Beautiful, isn’t it? So what happened here? Instead of calling the function within an anonymous wrapper, we create two partially applied functions that take the this context and two different first parameters for both of them. e and data will be passed on without us having to worry about it.
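If you are curious what bind is doing with those extra arguments, the prepending behaviour is easy to emulate by hand. Here is a minimal sketch of a partial helper of my own — it is not part of the article or of any library — that ignores this and only handles the argument list:

var partial = function (fn) {
  // Everything after the function itself becomes a preset argument.
  var preset = Array.prototype.slice.call(arguments, 1);
  return function () {
    // Call-time arguments are appended after the preset ones.
    var rest = Array.prototype.slice.call(arguments);
    return fn.apply(this, preset.concat(rest));
  };
};

var add2again = partial(add, 2);
add2again(10) === 12;

bind does the same prepending, with the added bonus of fixing the this value at the same time.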

If you are like me a few months ago, this is the point where you raise your eyebrows in shock and go through your code to clean up all these occurrences. When you’re done, please tell your friends.


Workers bought SUGAR from supermarket to slow cement flood 'farce' on Victoria Line - Transport - News - London Evening Standard


URL:http://www.standard.co.uk/news/transport/workers-bought-sugar-from-supermarket-to-slow-cement-flood-farce-on-victoria-line-9081168.html?origin=internalSearch


Victoria Line services from Brixton to Warren Street were suspended overnight after the mixture being poured into foundations of a £700m upgrade at Victoria leaked into a bunker packed with vital signalling equipment.

Contractors worked through the night to repair the damage.  Peter McNaught, Operations Director for the Bakerloo, Central and Victoria lines, said: “Our engineers have worked tirelessly through the night and have successfully repaired the damaged signalling equipment.  A good service is now operating across the Victoria line.

“We again apologise to our customers who were affected by yesterday’s disruption.”

A source told the Standard contractors were dispatched to nearby supermarkets to buy bags of sugar in a desperate bid to stop the knee-deep concrete from setting.

TfL today came under fire for initially informing commuters the line closure was due to “flooding”, but the cover was blown when a worker posted photographs showing racks of signalling equipment submerged in cement.

Concrete: a picture apparently showing damage to a control room. Credit: Usvsth3m.com

The worker said: “The only word for it is a f*** up of major proportions. Everyone was f-ing and blinding when they realised what had happened.

“It was knee-deep in the signal room swamping all the relay equipment and it’s going to be very, very expensive to repair because it was all brand new.”

 

Furious commuters today branded the blunder a “farce”. Trains were suspended from 1.30pm yesterday causing delays and misery for thousands across much of central London including Victoria and Oxford Circus stations.

Steve Williams, 27, an IT worker who commutes in from Oxford, said: “It’s a rubbish excuse to say there was a signal failure. They should just be honest in the first place.

“It’s stupid more than anything. I guess stupidity has ruled the day here.”

Daniel Amran, 26, an energy analyst, from Hendon, said: “It’s pretty awful to be honest. That particular route has been delayed three times in the last three days.”

Robert Flagg, 26, an engineer, from Canada Water, said: “As an engineer I’m thinking why was it there in the first place? They probably need to rethink their procedures.”

Richard Donovan, 37, a communications manager from Nottingham, said: “It’s one of those where someone has obviously made a horrendous mistake, but these things happen.”

The concrete was being used by private contractors who are building a major upgrade to the station due to be completed in 2018.

They were pouring it into voids in excavations for a new escalator control room when it seeped through to the control room below.

The source said: “The signalling room sits between the north and southbound tracks and it’s the main box controlling trains for the entire southern half of the line. They were pumping in concrete for an escalator tunnel down to the new platform and there must have been a crack or a hole.

“When they realised what had happened they went out and bought bags of sugar and threw it on because that stops concrete from setting as quickly.

“Hopefully it won’t take too long to sort out because they caught it in time before it set, but it’s not something that can be done overnight.

“TfL were telling people the reason for the closure was flooding, because technically it was. They just didn’t say it was flooded with concrete. They didn’t want people to know what a cock-up it was.”

Nigel Holness, operations director of London Underground, said: “Our engineers are working hard to resolve the situation as soon as possible to get services back up and running.”

How Twitter responded with flood of jokes

James Martin: BREAKING NEWS: Cement brings the Victoria Line to a halt. Mortar follow.

Tim Miller: There’s no hard and fast rule for dealing with these sort of situations, unfortunately.

Paul Silburn: Concrete on the victoria line is affecting services to Brickston and connections to the Cementral Line

Mike White: So the Victoria Line is closed due to a cement spillage. Customers are advised to use the Blue Circle line.

BBC London Travel: Rail Repla-Cement bus,

Ange: So the Victoria line will be down for the foreseeable future due to concrete flooding, nothing’s set in stone though

Tom Edwards: Grout news this morning was that the line was running again

Boris Watch: “quick drying concrete” is almost certainly not what it was


Awesomebox


URL:http://awesomebox.co


Awesomebox is for everyone - product managers, developers, designers, clients, marketers and anyone else who needs to be involved with the development process. Awesomebox lets more people be part of the process without getting in the way.

Any code that is executed inside a web browser (HTML/CSS/JS) works with Awesomebox. Anything from a simple marketing website to a complex AngularJS or Backbone app can be built using Awesomebox.

Nope! When you sign up, you'll be able to try out Awesomebox on an example website. When you're ready to use it on your own project, we'll help the developers you work with get set up and answer any technical questions they have.

DEA teaches agents to recreate evidence chains to hide methods | Muckrock


URL:https://www.muckrock.com/news/archives/2014/feb/03/dea-parallel-construction-guides/


Trainers justify parallel construction on national security and PR grounds: "Americans don't like it"

by Shawn Musgrave on Feb. 3, 2014, 10:30 a.m.

Drug Enforcement Administration training documents released to MuckRock user C.J. Ciaramella show how the agency constructs two chains of evidence to hide surveillance programs from defense teams, prosecutors, and a public wary of domestic intelligence practices.

In training materials, the department even encourages a willful ignorance by field agents to minimize the risk of making intelligence practices public.

The DEA practices mirror a common dilemma among domestic law enforcement agencies: Analysts have access to unprecedented streams of classified information that might prove useful to investigators, but entering classified evidence in court risks disclosing those sensitive surveillance methods to the world, which could either end up halting the program due to public outcry or undermining their usefulness through greater awareness.

An undated slide deck released by the DEA fleshes out the issue more graphically: When military and intelligence agencies “find Bin Laden's satellite phone and then pin point [sic] his location, they don't have to go to a court to get permission to put a missile up his nose." Law enforcement agencies, on the other hand, “must be able to take our information to court and prove to a jury that our bad guy did the bad things we say he did.”

The trainer’s notes continue, “In the old days, classified material was poison. In some ways, it still is… because if treated incorrectly, it can screw up your investigation."

A tactic known as “parallel construction” allows law enforcement to capitalize on intelligence information while obscuring sensitive sources and surveillance methods from the prosecution, defense and jury alike. DEA training documents suggest this method of reconstructing evidence chains is widely taught and deployed.

Last August, Reuters first reported on the practice of parallel construction by the DEA’s Special Operations Division (SOD), a secretive unit that includes representatives from the FBI, CIA and NSA. Slides obtained by Reuters defined the method as "the use of normal investigative techniques to recreate the information provided by SOD." But documents released to Ciaramella indicate that DEA trainers routinely teach the finer points of parallel construction to field agents and analysts across the country, not just within SOD.

The bulk of the release comprises eight versions of a training module, “Handling Sensitive Information.” Per lesson cover sheets, the module was created in 2007 for inclusion in entry-level analyst training programs, as well as for workshops at DEA field offices. The most recent dated revision in the release is from May 2012.

The module puts the issue of using sensitive intelligence in law enforcement a bit more delicately. Per the 2012 lesson plan, the main problem with combining intelligence collection with law enforcement investigations “is the high potential for disclosure of these sensitive sources of information in our open, public trial system.”

In addition to potential national security risks of exposing classified information and constitutional quandaries, an earlier version of the module highlights another issue with introducing sensitive or clandestine evidence into domestic trials: “Americans don’t like it.”

The instructor’s notes from the same revision clarify the public pushback rationale.

Given the “fish bowl” nature of law enforcement work, DEA Academy graduates are guided to only use techniques “which are acceptable to our citizens.”

Controversy notwithstanding, parallel construction apparently makes the DEA’s list of such palatable techniques. The modules make clear that the idea is to shape evidence chains so that neither the prosecution nor the defense are to be made aware of classified information, if it can be helped.

When the court is made aware of classified evidence, a wholly separate—if unfortunately named—squad of prosecutors called the Taint Review Team will consult with the judge to determine which evidence must be turned over to the defense.

As described in the released portions of the module, parallel construction simply entails splitting the prosecutorial labor, with a Taint Review Team tackling pre-trial review so the trial prosecutor encounters as little classified evidence as possible.

But the released training modules provide no guidance on key issues noted in documents obtained by Reuters last August. In particular, the SOD slides barred agents from disclosing classified sources on affidavits or in courtroom testimony. Under this strain of parallel construction, the court would never know the classified origins of an investigation.

"You'd be told only, ‘Be at a certain truck stop at a certain time and look for a certain vehicle.' And so we'd alert the state police to find an excuse to stop that vehicle, and then have a drug dog search it," as one former federal agent described the process to Reuters.

While there are no direct references to protocols of this kind, three additional slide decks released to Ciaramella cover traffic stops and drug dog sniffs extensively. These presentations are heavily redacted, but released portions address the advantages of pairing “tip information” and “vertical information transfers” with routine traffic stops as a pretext for making an arrest.

This same presentation offers guidance to officers wondering whether they should lie under oath rather than reveal that information came from a classified source.

DEA trainers advise officers in this position to let the prosecutor know “so that he or she can proactively address any issues” with the evidence in question, regardless of “where the information came from.”

The unprecedented window these training documents give into the parallel construction method still leaves many questions unanswered, especially when it comes to logistics and legal justifications. What could not be clearer, though, is the DEA’s stance that law enforcement must vigilantly protect intelligence resources by all possible means.

Join MuckRock and start submitting requests for government documents today. Stay on top of FOIA news by signing up for our mailing list, follow us on Twitter, or "Like" us on Facebook.


News | FiftyThree | EVERY STORY HAS A NAME FiftyThree’s story began...


URL:http://news.fiftythree.com/post/75486632209/every-story-has-a-name-fiftythrees-story-began?new_url=true


EVERY STORY HAS A NAME

FiftyThree’s story began with Paper. What began with three guys building an app out of a New York City apartment has gone on to become one of the most celebrated applications on iOS, defining mobile creativity and winning Apple’s 2012 iPad App of the Year. Paper embodied our belief that technology should support the human need to create. It’s a beautifully simple app that lets anyone capture their ideas and share them over the web. For millions of creators around the world, Paper is where they call home for their ideas—100 million, in fact, over the last two years. Paper has come to represent endless creative potential, and we couldn’t have asked for a better beginning to our story.

Stories have twists.

So it came as a surprise when we learned on January 30th with everyone else that Facebook was announcing an app with the same name—Paper. Not only were we confused but so were our customers (twitter) and press (1,2,3,4). Was this the same Paper? Nope. Had FiftyThree been acquired? Definitely not. Then, what’s going on?

We reached out to Facebook about the confusion their app was creating, and they apologized for not contacting us sooner. But an earnest apology should come with a remedy.

Stories reveal character.

There’s a simple fix here. We think Facebook can apply the same degree of thought they put into the app into building a brand name of their own. An app about stories shouldn’t start with someone else’s story. Facebook should stop using our brand name.

On a personal level we have many ties to Facebook. Many friends, former students and colleagues are doing good work at Facebook. One of Facebook’s board members is an investor in FiftyThree. We’re a Facebook developer, and Paper supports sharing to Facebook where close to 500,000 original pages have been shared. Connections run deep.

What will Facebook’s story be? Will they be the corporate giant who bullies their developers? Or be agile, recognize a mistake, and fix it? Is it “Move fast and break things” or “Move fast and make things”?

We’re all storytellers. And we show care for each other by caring for our stories. Thanks for supporting us.

Georg Petschnigg
Co-Founder and CEO
FiftyThree


Out in the Open: Man Creates One Programming Language to Rule Them All | Wired Enterprise | Wired.com


URL:http://www.wired.com/wiredenterprise/2014/02/julia


Stefan Karpinski was building a software tool that could simulate the behavior of wireless networks, and his code was a complete mess. But it wasn’t his fault.

As a computer science grad student with years of industry experience under his belt, Karpinski was far from a programming novice. He knew how to build software. The problem was that in order to build his network simulation tool, he needed four different programming languages. No single language was suited to the task at hand, but using four languages complicated everything from writing the code to debugging it and patching it.

It’s a common problem for programmers as well as mathematicians, researchers and data scientists. So Karpinski set out to solve it. He and several other computer scientists are building a new language they hope will be suited to practically any task. Dubbed Julia, it provides an early glimpse into what programming languages might look like in the not-too-distant future.

Today’s languages were each designed with different goals in mind. Matlab was built for matrix calculations, and it’s great at linear algebra. The R language is meant for statistics. Ruby and Python are good general purpose languages, beloved by web developers because they make coding faster and easier. But they don’t run as quickly as languages like C and Java. What we need, Karpinski realized after struggling to build his network simulation tool, is a single language that does everything well.

At one point, he vented his frustrations to Viral Shah, a fellow grad student at the University of California Santa Barbara. Shah introduced him to a computer scientist named Jeff Bezanson. It so happened that Bezanson had recently made a study of language design, and had come to the conclusion that the tradeoffs inherent in most languages were avoidable. “It became clear that a lot of it had been designed haphazardly,” Bezanson says. “If you started from the beginning, you could recreate the things that people liked about those languages without so many of the problems.”

Soon the team was building their dream language. MIT, where Bezanson is a graduate student, became an anchor for the project, with much of the work being done within computer scientist and mathematician Alan Edelman’s research group. But development of the language remained completely distributed. “Jeff and I didn’t actually meet until we’d been working on it for over a year, and Viral was in India the entire time,” Karpinski says. “So the whole language was designed over email.”

Together they fashioned a general purpose programming language that was also suited to advanced mathematics and statistics and could run at speeds rivaling C, the granddaddy of the programming world.

Programmers often use tools that translate slower languages like Ruby and Python into faster languages like Java or C. But that faster code must also be translated — or compiled, in programmer lingo — into code that the machine can understand. That adds more complexity and room for error.

Julia is different in that it doesn’t need an intermediary step. Using LLVM, a compiler developed at the University of Illinois at Urbana-Champaign and enhanced by the likes of Apple and Google, Karpinski and company built the language so that it compiles straight to machine code on the fly, as it runs.

What’s more, the team says, it has the mathematical and statistical chops to serve as an alternative to Hadoop, the widely used data crunching system developed at Yahoo and Facebook, at least in some cases. Hadoop lets you take a large problem, break it up into many small problems, and spread them across hundreds of machines. Julia is also designed for parallelism.

“We never expected it to be a drop-in replacement for Hadoop, but we wanted to be able to do some of the stuff you do in Hadoop,” Karpinski says. “You can start up 100 Julia processes and run them on different machines, and fetch the results from other machines. That kind of thing tends to be tedious work in Java, but in Julia, it’s relatively straightforward.”

The first public version of Julia was released in early 2012. Many were skeptical about the need for yet another programming language, but enough people shared the frustrations of its creators that it has now begun to catch on with scientists.

That said, it isn’t for everyone. Bezanson says it’s not exactly ideal for building desktop applications or operating systems, and though you can use it for web programming, it’s better suited to technical computing. But it’s still evolving, and according to Jonah Bloch-Johnson, a climate scientist at the University of Chicago who has been experimenting with Julia, it’s more robust than he expected. He says most of what he needs is already available in the language, and some of the code libraries, he adds, are better than what he can get from a seasoned language like Python.

Even if Julia never displaces the more popular languages — or if something better comes along — the team believes it’s changing the way people think about language design. It’s showing the world that one language can give you everything.

“People have assumed that we need both fast and slow languages,” Bezanson says. “I happen to believe that we don’t need slow languages.”

"Don't Reinvent the Wheel! Use a Framework!" They All Say. | mogosselin.com


URL:http://www.mogosselin.com/dont-reinvent-wheel-use-a-framework/


I see it more and more. In tutorials, on Youtube, in blog posts, in StackOverflow answers. If you want to fix something or develop a particular feature, you’ll probably stumble on a “download that framework, take this library, install this, click there” type of tutorial.

But where’s vanilla JavaScript? In exile somewhere? Buried in jQuery’s basement? And since when is WordPress a framework for developing Web applications?

Oh, and by the way, if I want to develop a simple Web application with a single form in Java, I probably don’t want to install Spring, Spring MVC and Hibernate.

OK?!

</rant>

Is WordPress a Framework?

First, it doesn’t seem really clear what the differences between frameworks, libraries and Web development platforms are. Here’s a little list to help you understand if you’re just starting:

  • Library: Code written to give you shortcuts on top of a language. Examples: jQuery for JavaScript, Apache Commons for Java.
  • Web frameworks: Starting “tools” that help you with common problems. For instance, in Web development we often need to map URLs to code somewhere; a framework will make this easier (hopefully; see the sketch after this list). Examples of Web frameworks: Laravel and CodeIgniter for PHP, Spring MVC and Struts for Java.
  • Web development platforms (and similar): A more general category. In this one I include every application with a content management system (CMS) and out-of-the-box features for users like a forum, a blog, or anything higher level. Examples of Web development platforms: WordPress or Joomla for PHP, Liferay or Magnolia for Java, Orchard for C#.
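To make the “map URLs to code” point concrete, here is a minimal sketch of my own of what a router boils down to, written in plain Node.js with no framework at all (the routes and handlers are invented for the example):

var http = require('http');

// A route table: URL paths mapped to handler functions.
var routes = {
  '/': function (req, res) { res.end('Home page'); },
  '/contact': function (req, res) { res.end('Contact form'); }
};

http.createServer(function (req, res) {
  var handler = routes[req.url];
  if (handler) {
    handler(req, res);
  } else {
    res.statusCode = 404;
    res.end('Not found');
  }
}).listen(3000);

A framework does the same job with nicer syntax, plus the parts you don’t see here: parameter parsing, middleware, error handling and so on.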

Why “You Shouldn’t Reinvent the Wheel” Doesn’t Apply to Frameworks

So, what’s the problem with using a framework/library/platform you ask? You shouldn’t reinvent the wheel, right?

You know what, that’s a bad analogy. It just doesn’t work, it’s too simple to define the use of frameworks or CMS/CRM or libraries this way. Web developers are not using wheels, they are making cars.

If you are building cars, yes, you can reuse wheels IF you understand how they work in different situations and with different kinds of cars. Otherwise, if your car doesn’t work the way you want, how can you fix it? Kick the tires?

Sure, those of us who have been working in Web development for 10 years can manage. Depending on our needs, we’ll know how to find vanilla JavaScript code without jQuery if we don’t need that big library. And if we don’t find anything? We’ll write the code ourselves.

The problem is always telling beginners (or letting them think it’s a good idea) to start with a framework/CMS because “you shouldn’t reinvent the wheel”.

Should Beginners Use a Framework for Web Development?

Here’s when beginners shouldn’t use code written by somebody else:

  • When they want to learn a programming language
  • When they’re paid to do a job and don’t understand how the framework/library really works

When they should use a framework:

Problems that Arise When Using Frameworks

It’s nice not “reinventing the wheel”, but it’s not always 100% happiness.

Here are some problems we all come across while using frameworks, libraries and Web development platforms:

1. Customization can be difficult in complex code.

Take for instance Liferay. Liferay is a big web development platform. If you want to start a membership site and don’t care about how the “out of the box features” are working, it’s nice. If you can compromise on the look and feel, no problems. Liferay has everything: a forum, blog, user management, content management system. The list is endless. But, if you want to customize it a little bit, it can be cumbersome.

According to Ohloh, Liferay has 4.1 million lines of code. If you want to change something, it’s somewhere in there.

Often, while using Liferay, I had to go look into the sources to understand what it was doing. If you don’t know the basics of Java, you won’t be able to do much.

The more complex the framework, the harder it is to customize it.

2. Debugging is usually more difficult when using frameworks.

Frameworks are good at throwing unreadable meaningless errors. I’m not saying that it always happens, but when it does it’s annoying to debug. If you’re a beginner and you don’t even understand the language, how can you fix those problems?

What’s Good About Frameworks?

Good things happen while using frameworks too:

1. Good frameworks will enforce good development practices.

If you’re starting out, choosing a framework to experiment with can be a good idea. You’ll be able to learn about design patterns such as MVC. I would advise using lightweight frameworks and staying away from libraries and development platforms if you’re just starting.

2. A good example (most of the time)

Good open source frameworks will show you how code should be written. If you can choose a framework, choose one that is open source and browse its code. You should be able to pick up some tricks here and there.

3. Fewer bugs and tested code

It’s especially true with libraries. The code you’re going to use, if picked carefully, will already be tested and will contain fewer bugs than something coded from scratch.

4. Reusing code

Isn’t it nice not reinventing the wheel?

So, what’s your opinion on using frameworks, libraries and Web development platforms?

Wheel picture by ginnerobot

Where LISP Fits - adereth


URL:http://adereth.github.io/blog/2014/02/03/where-lisp-fits/


There are a lot of great essays about the power and joy of LISP. I had read a bunch of them, but none convinced me to actually put the energy in to make it over those parentheses-shaped speed bumps. A part of me always wanted to, mostly because I’m convinced that our inevitable robot overlords will have to be programs that write programs and everything I had heard made me think that this would likely be done in a LISP. It just makes sense to be prepared.

Almost two years ago, a coworker showed me some gorgeous code that used Clojure’s thrush macro and I fell in love. I found myself jonesing for C-x C-e whenever I tried going back to Java. I devoured Programming Clojure, then The Joy of Clojure. In search of a purer hit, I turned to the source: McCarthy’s original paper on LISP. After reading it, I realized what someone could have told me that would have convinced me to invest the time 12 years earlier.

There’s a lot of interesting stuff in that paper, but what really struck me was that it felt like it fit into a theoretical framework that I thought I already knew reasonably well. This post isn’t about the power of LISP, which has been covered by others better than I could. Rather, it’s about where LISP fits in the world of computation.

None of what I’m about to say is novel or rigorous. I’m pretty sure that all the novel and rigorous stuff around this topic is 50 – 75 years old, but I just wasn’t exposed to it as directly as I’m going to try and lay out.

The Automaton Model of Computation

One of my favorite classes in school was 15-453: Formal Languages, Automata, and Computation, which used Sipser’s Introduction to the Theory of Computation.

One aspect that I really enjoyed was that there was a narrative; we started with Finite State Automata (FSA), analyzed the additional power of Pushdown Automata (PDA), and saw it culminate in Turing Machines (TM). Each of these models looks very similar, and they have a natural connection: they are each just state machines with different types of external memory.

The tape in the Turing Machine can be viewed as two stacks, with one stack representing everything to the left of the current position and the other stack as the current position and everything to the right. With this model, we can view the computational hierarchy (FSA –> PDA –> TM) as just state machines with 0, 1, or 2 stacks. I think it’s quite an elegant representation and it makes the progression seem quite natural.
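As a rough illustration of that two-stack view (my own sketch, not from the post or the book), here is a Turing-machine tape modelled as two stacks in JavaScript:

// The tape as two stacks: `left` holds the cells to the left of the head
// (nearest cell on top); `right` holds the current cell and everything
// to its right (current cell on top). Missing cells read as `blank`.
function Tape(blank) {
  this.left = [];
  this.right = [];
  this.blank = blank;
}

Tape.prototype.read = function () {
  return this.right.length ? this.right[this.right.length - 1] : this.blank;
};

Tape.prototype.write = function (symbol) {
  if (this.right.length) {
    this.right[this.right.length - 1] = symbol;
  } else {
    this.right.push(symbol);
  }
};

Tape.prototype.moveRight = function () {
  // The current cell slides onto the left stack.
  this.left.push(this.right.length ? this.right.pop() : this.blank);
};

Tape.prototype.moveLeft = function () {
  // The nearest cell on the left becomes the current cell again.
  this.right.push(this.left.length ? this.left.pop() : this.blank);
};

Drop the left stack and you have a Pushdown Automaton’s memory; drop both and you are left with a Finite State Automaton.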

A key insight along the journey is that these machines are equivalent in power to other useful systems. A sizable section in the chapter on Finite State Automata is dedicated to their equivalence with Regular Expressions (RegEx). Context Free Grammars (CFG) are actually introduced before Pushdown Automata. But when we get to Turing Machines, there’s nothing but a couple paragraphs in a section called “Equivalence with Other Models”, which says:

Many [languages], such as Pascal and LISP, look quite different from one another in style and structure. Can some algorithm be programmed in one of them and not the others? Of course not — we can compile LISP into Pascal and Pascal into LISP, which means that the two languages describe exactly the same class of algorithms. So do all other reasonable programming languages.

The book and class leave it at that and proceed onto the limits of computability, which is the real point of the material. But there’s a natural question that isn’t presented in the book and which I never thought to ask:

Finite State Automata ↔ Regular Expressions
Pushdown Automata ↔ Context Free Grammars
Turing Machines ↔ ?

While we know that there are many models that equal Turing Machines, we could also construct other models that equal FSAs or PDAs. Why are RegExs and CFGs used as the parallel models of computation? With the machine model, we were able to just add a stack to move up at each level – is there a natural connection between RegExs and CFGs that we can extrapolate to find their next level, the one that is Turing equivalent?

The Chomsky-Schützenberger Hierarchy

It turns out that the answers to these questions were well covered in the 1950’s by the Chomsky-Schützenberger Hierarchy of Formal Grammars.

The left-hand side of the relations above are the automaton-based models and the right-hand side are the language-based models. The language models are all implemented as production rules, where some symbols are converted to other symbols. The different levels of computation just have different restrictions on what kind of replacements rules are allowed.

For instance RegExs are all rules of the form $A \to a$ and $A \to aB$, where the uppercase letters are non-terminal symbols and the lowercase are terminal. In CFGs, some of the restrictions on the right-hand side are lifted. Allowing terminals to appear on the left-hand side lets us make rules that are conditional on what has already been replaced, which appropriately gets called “Context Sensitive Grammars.” Finally, when all the rules are lifted, we get Recursively Enumerable languages, which are Turing equivalent. The Wikipedia page for the hierarchy and the respective levels is a good source for learning more.
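As a small worked example of my own (not from the post), here is a regular grammar and a context-free grammar side by side. The regular grammar below generates the strings $ab$, $abab$, $ababab$, … using only rules of the allowed $A \to a$ and $A \to aB$ shapes:

$$S \to aB, \qquad B \to b, \qquad B \to bS$$

The classic context-free example is balanced parentheses, which no regular grammar can produce:

$$S \to (S)S \mid \varepsilon$$

The right-hand side of that rule mixes terminals and non-terminals freely, which is exactly the restriction the regular level imposes and the context-free level lifts.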

When you look at the definition of LISP in McCarthy’s paper, it’s much closer to being an applied version of Chomsky’s style than Turing’s. This isn’t surprising, given that they were contemporaries at MIT. In McCarthy’s History of Lisp, he explicitly states that making a usable version of this other side was his goal:

These simplifications made LISP into a way of describing computable functions much neater than the Turing machines or the general recursive definitions used in recursive function theory. The fact that Turing machines constitute an awkward programming language doesn’t much bother recursive function theorists, because they almost never have any reason to write particular recursive definitions, since the theory concerns recursive functions in general. They often have reason to prove that recursive functions with specific properties exist, but this can be done by an informal argument without having to write them down explicitly. In the early days of computing, some people developed programming languages based on Turing machines; perhaps it seemed more scientific. Anyway, I decided to write a paper describing LISP both as a programming language and as a formalism for doing recursive function theory.

Here we have it straight from the source. McCarthy was trying to capture the power of recursive definitions in a usable form. Just like the automata theorists, once the language theorists hit Turing completeness, they focused on the limits instead of the usage.

Theoreticians are more interested in the equality of the systems than the usability, but as practitioners we know that it matters that some problems are more readily solvable in different representations. Sometimes it’s more appropriate to use a RegEx and sometimes an FSA is better suited, even though you could apply either. While nobody is busting out the Turing Machine to tackle real-world problems, some of our languages are more influenced by one side or the other.

Turing Machines Considered Harmful

If you track back the imperative/functional divide to Turing Machines and Chomsky’s forms, some of the roots are showing. Turing Machines are conducive to a couple things that are considered harmful in larger systems: GOTO-based [1] and mutation-centric [2] thinking. In a lot of cases, we’re finding that the languages influenced by the language-side are better suited for our problems. Paul Graham argues that the popular languages have been steadily evolving towards the LISPy side.

Anyway, this is a connection that I wish I had been shown at the peak of my interest in automata theory because it would have gotten me a lot more excited about LISP sooner. I think it’s interesting to look at LISP as something that has the same theoretical underpinnings as these other tools (RegEx and CFG) that we already acknowledged as vital.

Thanks to Jason Liszka and my colleagues at Two Sigma for help with this post!

Judges Poised to Hand U.S. Spies the Keys to the Internet | Threat Level | Wired.com


URL:http://www.wired.com/threatlevel/2014/02/courtint/


How does the NSA get the private crypto keys that allow it to bulk eavesdrop on some email providers and social networking sites? It’s one of the mysteries yet unanswered by the Edward Snowden leaks. But we know that so-called SSL keys are prized by the NSA – understandably, since one tiny 256 byte key can expose millions of people to intelligence collection. And we know that the agency has a specialized group that collects such keys by hook or by crook. That’s about it.

Which is why the appellate court challenge pitting encrypted email provider Lavabit against the Justice Department is so important: It’s the only publicly documented case where a district judge has ordered an internet company to hand over its SSL key to the U.S. government — in this case, the FBI.

If the practice — which may well have happened in secret before — is given the imprimatur of the U.S. 4th Circuit Court of Appeals, it opens a new avenue for U.S. spies to expand their surveillance against users of U.S. internet services like Gmail and Dropbox. Since the FBI is known to work hand in hand with intelligence agencies, it potentially turns the judiciary into an arm of the NSA’s Key Recovery Service. Call it COURTINT.

Oral arguments in the Lavabit appeal were heard by a three-judge panel in Richmond, Virginia last week. The audio (.mp3) is available online (and PC World covered it from the courtroom). It’s clear that the judges weren’t much interested in the full implications of Lavabit’s crypto key breach, which one of the judges termed “a red herring.”

“My fear is that they won’t address the substantive argument about whether the government can get these keys,” Lavabit founder Ladar Levison told WIRED after the hearing.

The case began in June, when Texas-based Lavabit was served with a “pen register” order requiring it to give the government a live feed of the email activity on a particular account. The feed would include metadata like the “from” and “to” lines on every message, and the IP addresses used to access the mailbox.

Because pen register orders provide only metadata, they can be obtained without probable cause that the target has committed a crime. But in this case the court filings suggest strongly that the target was indicted NSA-leaker Edward Snowden, Lavabit’s most famous user.

Levison resisted the order on the grounds that he couldn’t comply without reprogramming the elaborate encryption system he’d built to protect his users’ privacy. He eventually relented and offered to gather up the email metadata and transmit it to the government after 60 days. Later he offered to engineer a faster solution. But by then, weeks had passed, and the FBI was determined to get what it wanted directly and in real time.

So in July it served Levison with a search warrant striking at the Achilles heel of his system: the private SSL key that would allow the FBI to decrypt traffic to and from the site, and collect Snowden’s metadata directly. The government promised it wouldn’t use the key to spy on Lavabit’s other 400,000 users, which the key would technically enable them to do.

The FBI attached a Carnivore-like monitoring system at Lavabit’s upstream provider in anticipation of getting the key, but Levison continued to resist, and even flew from Texas to Virginia to unsuccessfully challenge the order before U.S. District Judge Claude Hilton.

Levison turned over the keys as a nearly illegible computer printout in 4-point type. In early August, Hilton – who once served on the top-secret FISA court – ordered Levison again to provide them in the industry-standard electronic format, and began fining him $5,000 a day for noncompliance. After two days, Levison complied, but then immediately shuttered Lavabit altogether. Levison is appealing the contempt order.

The SSL key is a small file of inestimable importance for the integrity of a website and the privacy of its users. In the wrong hands, it would allow malefactors to impersonate a website, or, more relevantly in this case, permit snoops to eavesdrop on traffic to and from the site. Levison says he was concerned that once the government had his SSL key, it would obtain more secret warrants to spy on his users, and he would have no opportunity to review or potentially challenge those warrants.

“The problem I had is that the government’s interpretation of what’s legal and what isn’t is currently at its apex, in terms of authority and scope,” Levison says. “My concern is that they could get a warrant – maybe a classified warrant – that I wouldn’t even have knowledge of, much less the opportunity to object to … My responsibility was to ensure that everybody else’s privacy was protected.”

That was Levison’s thinking even before Snowden’s revelations showed us how pervasive and ambitious the NSA’s internet monitoring has become.

The judges in last week’s 4th Circuit hearing, though, weren’t interested in hearing about encryption keys. At one point, Judge Paul Niemeyer apologetically interrupted Levison’s attorney as soon as he raised the subject, and made it clear that he accepted the government’s position that the FBI was only going to use the key to spy on the user targeted by the pen register order.

“The encryption key comes in only after your client is refusing to give them the unencrypted data,” Niemeyer said. “They don’t want the key as an object. They want this data with respect to a target that they’re investigating. And it seems to me that that’s all this case is about, and it’s been blown out of proportion by all these contentions that the government is seeking keys to access other people’s data and so forth.”

“There was never an order to provide keys until later on, when [Levison] resisted,” Niemeyer added later in the hearing. “Even then, the government was authorized to use the key only with respect to a particular target.”

On that last point, Judge Niemeyer is mistaken. Neither the July 16 search warrant nor the August 5 order imposing sanctions placed any restrictions on what the government could do with the key. Without such a protective order, there are no barriers to the FBI handing the key over to the NSA, says a former senior Justice Department attorney, speaking to WIRED on condition of anonymity.

“You sometimes see limitations, or what’s referred to as minimization procedures: The government can only use this for the following purpose. There’s nothing like that here,” says the former official. “I’d say this is a very broad order. Nothing in it would prevent the government from sharing that key with intelligence services.”

Lavabit Orders (PDF) Lavabit Orders (Text)

The FBI’s relationship with the NSA is close – the FBI receives 1,000 tips a year from the NSA’s bulk telephone metadata collection; the bureau’s Data Intercept Technology Unit in Quantico, Virginia channels PRISM data to NSA headquarters in Ft. Meade from Silicon Valley. Presumably the two agencies are even closer on the matter that brought the FBI to Lavabit.

By shutting down Lavabit, Levison obviously thwarted prospective surveillance efforts. But we know – again, thanks to Snowden – that the agency sometimes collects encrypted data that it can’t crack, in the hope of getting the key later.

“We know from the minimization rules that are out that if they collect encrypted information they’re allowed to keep it indefinitely,” says Jennifer Granick, Director of Civil Liberties at the Stanford Center for Internet and Society. “That’s exactly why the Lavabit case is so important.”

If the NSA did collect Lavabit traffic, users who checked their email using Safari or Internet Explorer are theoretically compromised now. That’s because Lavabit failed to prefer the full suite of encryption algorithms that provide “perfect forward secrecy,” which generates a temporary key for every session, making both passive eavesdropping and retrospective cryptanalysis unlikely. Firefox and Chrome users should not be similarly vulnerable.

If it wasn’t collecting Lavabit traffic already, it’s safe to assume the NSA began doing so when Snowden revealed himself as the NSA leaker in early June.

The NSA could not legally target U.S. citizens or legal residents without first getting a specific warrant from the Foreign Intelligence Surveillance Court. But non-U.S. Lavabit users would be fair game.

Levison flew back to Texas on Friday to await the 4th Circuit’s ruling and continue work on his new initiative: a surveillance-resistant email infrastructure called Dark Mail. He notes that one possible – even likely – outcome of the case is that the appeals court rules against him on a technicality. Some of his lawyer’s arguments weren’t clearly raised below in front of Judge Hilton. The court could find that those arguments are forfeit now, and leave the substantive issues undecided.

Pragmatically, that could be the best outcome, given the panel’s hostility to the encryption question and its faith in the government’s honesty. But Levison would prefer to lose on the substantive issue and continue the fight all the way to the Supreme Court. If the 4th Circuit doesn’t decide one way or the other, other U.S. internet companies won’t know where they stand when the government comes for their keys. The cloud of distrust that’s gathered over U.S. companies in the contrail of the NSA revelations will grow even darker.

“It’ll leave this issue completely in limbo, with no end in sight,” Levison says. “So how is the industry going to handle that? They’ll have to wait years for somebody else to come along who’s willing to stand up and say, ‘no,’ and take the government back to court.”

Editorial: Why Games Should Enter The Public Domain | Rock, Paper, Shotgun


URL:http://www.rockpapershotgun.com/2014/02/03/editorial-why-games-should-enter-the-public-domain/


By John Walker on February 3rd, 2014 at 1:00 pm.

A few days ago I inadvertently caused a bit of a fuss. In writing about GOG’s Time Machine sale, I expressed my two minds about the joy of older games being rescued from obscurity, and my desire that they be in the public domain. This led to some really superb discussion about the subject in the comments below, and indeed to a major developer on Twitter to call for me to be fired.

I wanted to expand on my thoughts, rather than leave them as a throwaway musing on a post about a website’s sale. But I also want to stress that these are my thoughts-in-process, and not those of RPS’s hivemind. This isn’t a petition – it’s an exploration of my thoughts on the subject. Let’s keep that in mind as we decide whether I should indeed fire myself.

I said it frustrates me that games more than a couple of decades old aren’t entering the public domain. Twenty years was a fairly arbitrary number, one that seems to make sense in the context of games’ lives, but it could be twenty-five, thirty. It’s not the point here. My point was, and is, that I have a desire for artistic creations to more quickly (indeed, at all) be released into the public domain, after a significant period of time during which the creator can profit.

And it was this that caused 3D Realms’ George Broussard – a man directly involved in the story of Duke Nukem Forever – to say that it, “starts with the stupidest sentence I’ve ever read.”

This "article" at RPS starts with the stupidest sentence I've ever read. So god damned stupid. http://t.co/ECEroylTN8 — George Broussard (@georgeb3dr) January 29, 2014

Allowing myself to publish that, said George, should mean that I fire me.

@wickerwaka The whole thing, really. But especially that. Whoever allowed that to be printed should be fired. — George Broussard (@georgeb3dr) January 29, 2014

It does seem a strong reaction. But not a unique one. Cliff “Cliffy B” Bleszinski retweeted Broussard’s thoughts, and went on to say,

I'll never get over the culture that doesn't understand that developers need to eat and have mortgages and that games cost money to make. — Cliff Bleszinski (@therealcliffyb) January 29, 2014

Oh dear.

So before we move on to the nuances of the argument, let’s get one thing out of the way: Expressing a desire for a game to enter the public domain, let’s say twenty years after publication, does not in any sense whatsoever suggest a desire for developers to not get paid. I resent having to type this. It’s a bit like finding yourself having to say that you’re not in favour of gruesomely starving children to death because you expressed a thought that they probably shouldn’t get to exclusively eat at McDonald’s. What I am in fact saying is: “developers should get paid for the work they do, and then keep getting paid for the same bit of work, over and over and over for the next twenty years, even though they stopped doing any work related to it many years ago.” It’s not entirely apparent how the two sentiments are being confused.

Well, it is, actually – I’m being facetious. The two are being deliberately conflated by a contingent who find the possibility of cultural artifacts ever returning to the culture that spawned them to be so repellent that they must eliminate anything that treads even close to challenging what they see as their perpetual rights to profit from ancient work. (And let’s be clear here – creators from the tumescent Phil Collins to our very own Broussard are arguing for perpetual copyright here, far outreaching even the current grasp of the law.)

@axikal @worthplaying @therealcliffyb Insanity. Creators have a right to be paid indefinitely on their work if market is there. Period. — George Broussard (@georgeb3dr) January 30, 2014

I think the best approach here is to address the most frequent questions directly:

People need a financial incentive to create. If you take that away, it will harm creativity.

I think this argument is so astronomically false that my hat flies clean off my head when I read it. It’s so ghastly, so gruesomely inaccurate, such a wretched perspective of humans – these wonderful creatures so extraordinarily bursting with creative potential – and it makes me want to weep. The idea that creativity is only feasible if there’s a financial reward is abundantly demonstrably false. For someone to make their living from creative pursuits relies on some sort of financial return, yes. Creativity is not dependent on its being one’s living. That’s enormously crucial to remember. But even when talking about those seeking to make their living, a finite stretch of time in which exclusive profits can be made doesn’t prevent anyone from becoming a multi-millionaire from their work. An eventual transition to the public domain would in no sense take away the financial incentive to create.

And not only does an argument for a more imminent end to copyright periods than the current monstrosities like “life plus 70 years” not inhibit someone from making a living from their creative works, but it also doesn’t even mean they couldn’t continue making a living from the creative works they produced after the copyrights have expired – that’s the magic of Public Domain! They just then share the ability to profit from those works with others. I’m going to get into this far more deeply below.

While it might well stop Cliff Richard from being able to replace all the chandeliers in his mansions with money made from a song he recorded sixty years ago and hasn’t touched since, the potential of entry to the public domain is not going to make anyone poor. And I’m perfectly okay with Cliff’s dusty decor, not least because at the time of his recording said song, he would have agreed to that song’s entering the public domain by now.

But why shouldn’t someone get to own their own ideas? They created them, after all.

This is where things get a bit philosophical/metaphysical. But it comes down to accepting that there is a material difference (literally) between a game and a table, a song and a car. One physically exists. The other doesn’t. One is a thing, the other is an idea. And ideas is what this is all about.

Everyone has experienced the dribble-chinned tedium of various copyright industries screeching, “BUT YOU WOULDN’T STEAL A CAR!”* at us, as we sit in the cinema to watch a film while being told about how it’s our fault that no one’s sitting in a cinema watching a film, or indeed as we sit back to enjoy our legally purchased DVD. The comparison is false. And it’s a false comparison that it’s very much in the interests of the copyright industries to have us conflate. No, I would no more steal a car than I would tolerate a company telling me that they had the exclusive rights to the idea of cars themselves. However, there are things I’m very happy to ‘steal’, like knowledge, inspiration, or good ideas. And until incredibly recently, the likes of literature and music were counted amongst such things as knowledge, inspiration and good ideas.

The war for minds waged by the copyright industries over the last one hundred years has been so gruesomely effective that now the very suggestion that ideas are not immediately comparable with physical objects is met with violent anger. In a world where everyone alive has been raised in a system where Disney can pick the laws, it is perhaps not surprising that such contrary notions are met with such fury. What was once perceived to be a gross abuse is now ferociously defended by those abused by it.

Sudden changes occurred at the turn of the last century, where once ideas that were shared by oral and aural traditions, or indeed in copied texts, were confined to pieces of plastic. A couple of generations later, and these confinements were accepted as the only possibility. Then the relatively recent ubiquity of the internet has suddenly revealed this to be as transient and ethereal as it always was. However, vast industries had been built up around this temporary imprisoning of ideas, and they’re not all that delighted about their reign coming to its natural end.

Copyright has come full circle. Introduced in the 17th century as a form of censorship, an attempt by the monarchy to prevent the new-fangled printing press from being able to easily disseminate Protestant information, it was after a couple of hundred years eventually fenced into something vaguely useful. It stood to defend a creator’s right to protect their creations for a limited period, before they re-entered the public domain. Based in an understanding that creations are not uniquely birthed from the mind of a single individual, but rather the results of a massive collective sharing of cultural ideas over thousands of years, it made sense for their creation to be set free again at a later date. Those who found a demand for their creations, when they applied this shared culture to their own projects, would therefore receive recompense, either through patronage, or through payment for sales and performance. And they could (and can) continue to do so in perpetuity. Only, after that agreed period of time (different lengths in different nations), they would no longer exclusively own rights to that idea.

But now copyright seeks to protect individuals, not ideas. In fact, its purpose is to restrict the free flowing of ideas, to prevent cultural exchange, for the profit of the few. Copyright itself is the threat to future creativity, attempting to artificially restrict that most human of actions: sharing ideas. It has returned to its origins, and exists as a form of censorship. Not a censorship many are willing to recognise as such, so successful and endemic is the international brainwashing by the copyright industries, but the censorship of ideas all the same.

So why shouldn’t someone get to own ideas like they own a table? Because ideas don’t exist in an ownable form, are born of the shared cultural mass of humanity, and you can’t rest a coffee mug on an idea.

But why shouldn’t someone be allowed to continue profiting from their idea for as long as they’re alive?

Putting aside that an embracing of the public domain does not prevent someone from profiting from their idea, my response to this question is: why should they?

What I’ve found interesting about asking this question of people is that I’ve yet to receive an answer. I’m either told it’s on me to explain why they shouldn’t, as if I hadn’t just spent thousands of words doing that, or I’m told that they just should. I’ve noticed a complete unwillingness for people to stop and engage with the question. Why should someone get to profit from something they did fifty years ago? In what other walk of life would we willingly accept this as just a given? If a policeman demanded that he continue to be paid for having arrested a particular criminal thirty-five years ago, he’d be told to leave the room and stop being so silly. “But the prisoner is still in prison!” he’d cry, as he left the police station, his pockets out-turned, not having done any other work in the thirty-five years since and bemused as to why he wasn’t living in a castle.

What about the electrician who fitted the lighting in your house? He requires a fee every time you switch the lights on. It’s just the way things are. You have to pay it, because it’s always been that way, since you can remember. How can he be expected to live off just fitting new lights to other houses? And the surgeon’s royalties on that heart operation he did – that’s the system. Why shouldn’t he get paid every time you use it?

So why should a singer get to profit from a recording of his doing some work thirty-five years ago? The answer “because it’s his song” just isn’t good enough. It was PC Ironburns’ arrest. “But creating that song may have taken years!” PC Ironburns spent years investigating the crimes before he caught that pesky crim! The electrician had to study for years to become proficient enough to rig up lighting. The doctor spent seven years in medical school! Imagine if this system we wholly accept from creative industries were accepted elsewhere – the ensuing chaos would be extraordinary. Take Broussard’s claim above, that “Creatives have a right to be paid indefinitely on their work”, and switch out “Creatives” for any other job. “Dentists”, “teachers”, “librarians”, “palaeontologists”… It starts to appear a little ludicrous.

The answer, “Because they should” just doesn’t address the question. That instinctive response is one born of the capturing of culture by industry, bred into us from birth. To stop, shake it off, and approach the question anew takes considerable effort. But then once shaken, the light suddenly comes shining in.

Why do we, as people who likely make money by working a regular job and getting paid for the time we spend doing it, so vigorously defend this peculiar model that is the antithesis of our existence?

I can’t believe you’re arguing that developers shouldn’t be able to profit from their games.

My poor head. But yes, let’s bring this back to videogames. Games feel different from songs, even films, don’t they? They’re modern. They weren’t even a concept before copyright had so grotesquely morphed into its current form. The industry was born into a world where creators already assumed a life-long possession of their particular manipulation of the culture they’d received from others. It is, perhaps even more than film, music or literature, an industry that has grown up most at odds with the concept of the public domain. (Which anyone over the age of 30 will recognise as quite a grim irony, as they recall the days of public domain gaming in the early 90s.)

And unlike music, theatrical productions, or story, they never pre-existed in a plastic-free form. (We could of course argue about snakes & ladders, hopscotch and ‘it’, but for the sake of simplicity, we won’t.) I accept that it’s perhaps a far bigger cultural shift to accept that the whizzy graphics and explosions are, when all is pulled apart, ethereal concepts, ideas of 1s and 0s bouncing off our retinas, as possible to hold in our hands as a memory of an aunt’s house. But as much as it may not instinctively feel like it, it remains entirely true.

But games, unlike some other creative pursuits, are often made by huge teams of people. While there may be a project lead, this isn’t like a book’s author. This is a company. People getting paid to do their job, to make a game. The rights to the game, the ownership, lies with the publisher that funds it, not the creatives who create it. When a 20- or 30-year-old game is still being charged for, not a single person who was involved in its creation is getting a dime.

When it is more like a book with an author, an indie developer and their self-published project, then yes, there is a greater chance they’ll see the money. But then we return to my larger, more significant argument: that after those decades of getting paid for it, it’s time to return it to culture.

But people who work deserve to get paid.

I’m being as patient as possible. And this is where reasonable copyright laws to protect creative pursuits can step in. Agreed standards within the culture from which the ideas were born, whereby we bestow financial worth upon the action of a creator generating those creations. Because despite the question that is still bursting from some people’s minds about how I don’t want anyone to get paid, I ADORE seeing creative people getting paid.

I even adore the idea of people getting paid for their work after copyrights have expired. Further, I absolutely believe that it is right and fair for anyone who works to make that public domain material available to me in a convenient form to be free to charge what they like for doing so. To those who interpreted my previous article as a claim that GOG shouldn’t be able to charge for much older games, that’s entirely not the case. I’d just like GOG to be able to charge for their own work, and not to have to then include costs for the license they’re paying to whichever corporation owns the copyright on a game it had nothing to do with creating.

You’re a hypocrite, because writing is a creative industry, and you don’t give all your writing away for free, and you get paid, and you’re ugly.

It’s polite to wait to find out if someone’s being hypocritical before calling them a hypocrite. However, despite there being little demand for videogame journalism written twenty years ago, and therefore not something I have to face too often, I do consider my older work to be in the public domain. I wrote for Future Publishing for about ten years, where my contract stated they had exclusive rights to the work for six months, and thereafter we shared rights to it in perpetuity. I have always immediately revoked any private rights to that Future work, and while I maintain the right to be recognised as the creator of the work, I’m delighted for anyone to use it in any fashion they see fit. If that person wanted to pay me for doing so, I’d be even further delighted. I believe in what I’m saying here.

So what do you want to see changed then, apart from developers not getting paid for their work?

There are very few cases of developers making their living from the profits of games made 20 years ago. Gaming, as a medium, has a far more rapid expiry date than music, film or any other of its contemporaries. Despite rich retro scenes, and dedicated emulator projects, getting an old game running at all can be quite the ordeal. Sites like GOG do a wonderful job of preserving old games and making them easy to run, but this doesn’t directly translate to astonishing sales that will keep the original developer in caviar-coated Jaguars for the rest of their lives – in fact, it’s phenomenally unlikely that a penny of most sales will reach the developers at all. Other sites dedicated to getting forgotten games working again – abandonware, as it’s known – are fiercely threatened and shut down not by the creatives who designed the games, but by the company that bought the company that merged with the company that had the IP rights. And if you don’t like 20 years, because that’s the mid-90s, and it feels too dangerously close, then make it 30 years. Make it a sensible length of time that ensures that developers are richly rewarded for their efforts, and then it is released into the cultural wild – people’s to share, copy, remix or add to their own peculiar retro project’s catalogue. People who are, you know, actually doing some work to make it playable.

And no, of course I don’t believe that gaming should be treated differently from other media. I believe other media should be rapidly reined in to the same standards, before we see the cultural wells dry and crack.

But hey, here’s a thing: I don’t have any power. My saying this, my believing in returning creativity to the pool from which it came, doesn’t mean anyone has to. Shocking news. I have no delusions that writing all this out is going to spark a world-wide revolution in copyright law. Again, stunning revelations. But what I do hope is that some people, an odd few, might connect to this in some way, and see fit to opt to let their games enter the public domain. Or commit to publish their games with a promise that after a certain amount of time, they will do so. Even opt to publish their games under Creative Commons copyleft licenses, in order to maintain all the legal rights and protections they need, without stifling the cultural world from which they so richly drew.

I’m a romantic.

And just in case, let’s do this one more time: I love it so much when talented people get handsomely rewarded for their great creative work. It brings joy to my heart when I see stories of the likes of Garry Newman or Markus Persson becoming fantastically wealthy in response to their brilliant creations. Little makes me smile more broadly in an average day at work than reading about an indie developer reporting that their game’s sales mean they can give up their day job and focus on what they love.

Further, I would so enormously love to see a situation arise where we can see truly patron-led creative funding, where gaming communities put forward their money so that creatives producing truly wonderful gaming projects can do so without the need for commercial success.

I want money flowing toward those whose talents warrant it like we’ve never seen before. I want developers to get paid.

*I do also wonder if it can be the most effective campaign against people who do steal cars.

Further reading:

Nick Mailer’s essential essay on IP and copyright.
Techdirt’s piece on what should be, but isn’t, in the public domain.
Boingboing’s article on Naomi Novik’s testimony to Congress on copyright and fair use.

It’s Time to Make OpenStreetMap Your Only Street Map | Steve Coast

$
0
0

Comments:"It’s Time to Make OpenStreetMap Your Only Street Map | Steve Coast"

URL:http://stevecoast.com/2014/01/30/its-time-to-make-openstreetmap-your-only-street-map/


Today at Telenav we’ve announced that we have acquired skobbler– an OpenStreetMap (OSM) navigation company based in Germany – for approximately $24 million. skobbler brings a super popular OSM navigation app and 80+ employees in Europe to Telenav, expanding our reach globally across many of our products, services and offices.

In case you aren’t familiar with it, OpenStreetMap is the worldwide wiki-map that anyone can edit. When I founded OSM nearly a decade ago, my vision was to create a map everyone could use and contribute to. No strings attached. I created OSM as a non-profit community project – no one owns it and none of the community members make money from editing it. It is built and managed by people just like you, updating their neighborhood maps from their phones and computers.

Current OSM map vs. Google Map of Sochi, Russia where the 2014 Olympic Games begin on Feb. 7
(Thanks to Alastair Coote)

Have others tried their hand at crowd-sourcing map data as well? Absolutely. Waze and Google – or, just Google now – provide similar mechanisms to improve their maps, based mostly on OSM’s innovations. With one big catch. It is very much their map. Not yours. (Just ask the developers who pay a lot of money to use it.)

OpenStreetMap is different. All of the quality data contributed is openly available – just like Wikipedia. So, anyone can download, experiment and play with it freely. It’s not locked up beyond your reach.
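Here is what "download, experiment and play with it" can look like in practice. This is a minimal Python sketch, and the choice of tool is my own assumption (the post names no particular one): it asks the community-run Overpass API, which serves OpenStreetMap data, for drinking-water taps inside a small bounding box around Sochi. The bounding box and the tag are arbitrary illustrations.

import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Community-run endpoint that serves OpenStreetMap data via the Overpass query language.
OVERPASS_URL = "https://overpass-api.de/api/interpreter"

# Overpass QL: JSON output, all drinking-water nodes inside a (south, west, north, east) box.
query = """
[out:json][timeout:25];
node["amenity"="drinking_water"](43.55,39.68,43.65,39.80);
out body;
"""

payload = urlencode({"data": query}).encode()
with urlopen(OVERPASS_URL, data=payload) as response:
    elements = json.load(response)["elements"]

for node in elements:
    # Every element carries its OSM id, coordinates and whatever tags mappers added.
    print(node["id"], node["lat"], node["lon"], node.get("tags", {}))

The same data is also available as bulk downloads if you would rather work offline; the point is simply that nothing stands between you and the raw map.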

OSM is one of the world’s most active open and crowd-sourced projects with over 1.5 million registered editors (a number that has been doubling every year). It has grown exponentially faster than I could have ever imagined ten years ago. In fact, it has been a fantastic display map (a map you can look at) for some time, mapped right down to trees and footpaths. We’ve seen many uses of OSM in that context, from mere pretty artifacts to stimulating visualizations. The quality of the map data has evolved so much that, in the past couple of years, developers like Foursquare, Pinterest and Uber have integrated OSM as a display map into their products (most likely as a way to get access to a more detailed map and to avoid those costly fees from Google).

Mountain terrain in Sochi, Russia where skiers and other athletes will compete.

Today, OSM is a repository of quality map data, with more coming in than going out. I want to change that. Now it is time to leapfrog the simple design use cases – the economically efficient background usage of the map. It’s time to take OSM and harness it for everyday navigation. That’s where the users are and where we can really make a difference.

I’d like to get OSM to seven billion contributors in the next year or two. The only real way to get there is to allow a significant number of consumers to get their hands on the map. I want more mobile users to have the chance to navigate with it and provide feedback as they go. This feedback can be implicit in their GPS trails, or explicit in their feedback to us as they tell us where the map needs improvement.

Turn-by-turn navigation on our phones is the way most people in the world use maps today, and it takes incredible effort and work from companies like Telenav and skobbler to mold OSM into something a consumer will get a thrill from using. That’s what we’re focused on: getting OSM into the hands of the everyday person, so that it’s part of our daily lives.

While Wikipedia proved the crowd sourcing model, OpenStreetMap is about taking it to the next level, switching it into warp drive, turning up the volume, pressing ‘play’ and not looking back. Now it’s about closing the loop. It’s no longer about taking OSM data, filtering and massaging it into a simple map to put pins on top of. It’s about solving real problems for users – how to get somewhere – and providing them with a great experience that they are inherently a part of, by fixing the map as they go. To make this work smoothly requires tremendous engineering effort, orders of magnitude beyond providing display maps. We, at Telenav, have taken on that challenge and I am personally extremely excited to be a part of the team that is going to make it happen.

For nearly ten years, OSM has had potential for developers and consumers; let’s switch it up and give it potential because of developers and consumers. While others have spent billions of dollars building unsustainable maps based on your contributions, OSM is free, easy and available to all.

The project is ready for you. Here is how you can contribute:

…and watch for OSM data and services coming to Scout, our award-winning consumer navigation offering, very soon.

It is time to make the switch: make OpenStreetMap your only street map.


Heroku XL

UX Crash Course: 31 Fundamentals | The Hipper Element

$
0
0

Comments:"UX Crash Course: 31 Fundamentals | The Hipper Element"

URL:http://thehipperelement.com/post/75476711614/ux-crash-course-31-fundamentals


My New Year’s Resolution for 2014 was to get more people started in User Experience (UX) Design. I posted one lesson every day in January, and thousands of people came to learn!

Below you will find links to all 31 daily lessons.

Basic UX Principles: How to get started

The following list isn’t everything you can learn in UX. It’s a quick overview, so you can go from zero-to-hero as quickly as possible. You will get a practical taste of all the big parts of UX, and a sense of where you need to learn more. The order of the lessons follows a real-life UX process (more or less) so you can apply these ideas as-you-go. Each lesson also stands alone, so feel free to bookmark them as a reference!

****

Introduction & Key Ideas

#01 — What is UX?

#02 — User Goals & Business Goals

#03 — The 5 Main Ingredients of UX

****

How to Understand Users

#04 — What is User Research?

#05 — How to Ask People Questions

#06 — Creating User Profiles

#07 — Designing for Devices

#08 — Design Patterns

****

Information Architecture

#09 — What is Information Architecture?

#10 — User Stories & Types of Information Architecture

#11 — What is a Wireframe?

****

Visual Design Principles

#12 — Visual Weight, Contrast & Depth

#13 — Colour

#14 — Repetition & Pattern-Breaking

#15 — Line Tension & Edge Tension

#16 — Alignment & Proximity

****

Functional Layout Design

#17 — Z-Pattern, F-Pattern, and Visual Hierarchy

#18 — Browsing vs. Searching vs. Discovery

#19 — Page Framework

#20 — The Fold, Images, & Headlines

#21 — The Axis of Interaction

#22 — Forms

#23 — Calls-to-Action, Instructions & Labels

#24 — Primary & Secondary Buttons

****

User Psychology

#25 — Conditioning

#26 — Persuasion

#27 — How Experience Changes Experience

****

Designing with Data

#28 — What is Data?

#29 — Summary Statistics

#30 — Graph Shapes

#31 — A/B Tests

****

If you know someone else who wants to learn UX, please share!

If this is how you discovered my blog, it doesn’t stop here! I post a lot of awesome shit about UX, design, persuasion, and human behaviour. It’s a lot easier than hunting for links on your own!

Comments? Questions? Concerns? Find me on Twitter.

The Plan | Nullify NSA!Nullify NSA!

$
0
0

Comments:"The Plan | Nullify NSA!Nullify NSA!"

URL:http://offnow.org/plan/


In 2006, it was reported that the NSA had maxed out capacity of the Baltimore-area power grid.  Insiders reported that

“The NSA is already unable to install some costly and sophisticated new equipment. At minimum, the problem could produce disruptions leading to outages and power surges. At worst, it could force a virtual shutdown of the agency.” August 6, 2006

In other words, the NSA has an Achilles heel.

WATER

To get around the physical limitation of the amount of power required to monitor virtually every piece of communication around the globe, the NSA started searching for new locations with their own power supplies.

The new Utah Data Center opening in Bluffdale was chosen due to the access to cheap utilities, primarily water.  The water-cooled supercomputers require 1.7 million gallons of water per day to function.

No water = No data center.

The water being provided to the Utah Data Center comes from a political subdivision of the state of Utah.

They have the ability to turn that water off.

The situation is the same at many other locations.  Read on for more details.

4TH AMENDMENT PROTECTION ACT

The model legislation (HERE), ready for introduction in any state, would ban a state (and all political subdivisions) from providing assistance or material support in any way with the NSA spying program.

This would include, but is not limited to:

  • Refusing to supply water or electricity from state or locally-owned or operated utilities
  • A ban on all law enforcement acceptance of information provided without a warrant by the NSA or its Special Operations Division (SOD)
  • Severe penalties for any corporations providing services for or on behalf of the state that would fill the gap and provide the NSA with the resources it requires to stay functional.

While the federal government would not be prevented from bringing in its own supplies, it’s not likely that they have the capacity to do so.

The states and local communities should simply turn it off.

LEGAL DOCTRINE  

The legal doctrine behind this is “anti-commandeering.”  It’s the principle that the federal government doesn’t have the authority to force the states (or local communities) to carry out federal laws, regulatory programs, and the like.  The Supreme Court affirmed this three times in recent years, the cases being: 1992 New York, 1997 Printz, and 2012 Sebelius.  It also affirmed this doctrine in the 1842 Prigg case where states refused to assist the federal government in capturing and returning runaway slaves.

This is also consistent with what James Madison advised when writing about the Constitution in Federalist #46.  Among the four steps he advised as “powerful means” to oppose federal power was “a refusal to cooperate with officers of the Union.”

MANY LOCATIONS

It’s not just Utah.  The NSA is reliant on many states and local communities to provide the resources required to operate their spying programs.

In Texas, the new data center opening in San Antonio has its electricity provided exclusively by the city-owned power company.  And the NSA was quite upfront about the fact that Texas was chosen because of its independent power grid.  The NSA is extremely concerned about basic utilities.

And states providing them don’t have to.

In Augusta, Georgia, the “threat operations center” has its water and even sewage treatment provided by local government services.

There are also NSA “data centers” or “listening posts” in Colorado, Washington, West Virginia, Tennessee, and Hawaii.  Each one is a unique circumstance where a multi-prong strategy can and will create roadblocks to implementation.

CORPORATIONS

While many locations rely on state or local governments to assist or directly provide badly-needed utilities, others partner closely with corporations to do so.

For example, in Augusta, Georgia, a partnership with Georgia Power (a subsidiary of the massive electric holding company in the US, the Southern Company), literally kept the lights on.

The local paper reported that “Before a partnership in 2006 with Georgia Power, outages were a regular occurrence on post, particularly during the summer, when heavy demands were placed on the system.”

UNIVERSITIES

The NSA has its tentacles deep into the youth as well, with heavy partnerships at universities in all but 8 US states.  In late 2012, the NSA reported that there are now 166 universities in this program.  (see the full list here)

These “Centers of Academic Excellence” are not just a recruiting ground for future analysts in the massive spy centers around the country, they provide valuable research partnerships to bolster the NSA’s spying and data-collection capabilities.  Universities are often provided with funding, scholarships and other tools to expand research and recruitment.

LAW ENFORCEMENT

The NSA has often claimed to be engaging in such activities to protect you from “terrorists,” and many people have accepted this kind of personal intrusion with the belief that they were being kept safe.  But the fact is that their programs are much broader – by far.

The Special Operations Division (SOD) is a highly secret federal unit which passes information collected without a warrant by the NSA to state and local law enforcement for the investigation of regular crimes – not terrorism-related at all.

STRATEGY

A multi-prong strategy is an absolute must when working to prevent the kind of 4th Amendment violations seen under the NSA spying program.

Currently, activists are engaged in the support of lawsuits from EFF and ACLU, and in support of Congressional legislation to limit or stop the NSA.  But waiting for these to play out positively is a dangerous game of chicken.

A recent vote in Congress which failed to defund the NSA spying program indicates that relying on them to stop the NSA isn’t enough.

By approaching the NSA on multiple fronts, it’s certainly possible to overwhelm them and make their programs too difficult or costly to carry out.  A program to Turn it Off and render the NSA’s spying program as good as null and void intersects in 5 main areas:

  • State legislation – passed in every state, banning cooperation, compliance, and law enforcement collaboration.
  • Local Resolutions – passed in every possible county, city and town, supporting these principles and calling on the state to pass the 4th Amendment Protection Act.
  • Corporate Protests – and opposition to those corporations providing the resources needed to carry out the NSA spying program.
  • Campus Actions – including both protests against NSA/University partnerships, and organizational and student government resolutions formally calling for an end to such partnerships.
  • Environmental concerns – the waste of resources is massive, with millions of gallons of water being used every single day at just one NSA facility.

From Thoreau to Rosa Parks, and from Gandhi to you – a successful strategy to protect your liberties requires non-compliance and peaceful resistance.

And, as Rosa Parks proved, saying “NO!” can change the world.


P versus NP Explained

$
0
0

Comments:"P versus NP Explained"

URL:http://www.danielmiessler.com/study/pvsnp/


If you spend time in the programming community you probably hear the term “P versus NP” rather frequently. Unfortunately, many with computer science degrees have a weak understanding of the concept—let alone those without any training on it.

So here it is, explained simply in a way you'll hopefully never forget.

P vs. NP

P versus NP is the gap between being able to solve a difficult problem quickly and being able to verify the correctness of any given answer to that problem.

P and NP are two different kinds of problems. P problems are easy for computers to solve, and NP problems are easy for computers to check answers for, but can be extremely difficult for computers to solve.

All P problems are NP problems; that is, if it's easy for the computer to solve, it's easy to verify the solution. The P vs NP problem asks: within the class of NP problems, are there problems which are not P, that is, which are not easy for computers to solve?

The problem is that most real-world challenges are NP problems, not P problems. Let's look at a few examples:

  • A traveling salesman wants to visit 100 different cities by driving, starting and ending his trip at home. He has a limited supply of gasoline, so he can only drive a total of 10,000 kilometers. He wants to know if he can visit all of the cities without running out of gasoline.
  • A farmer wants to take 100 watermelons of different masses to the market. She needs to pack the watermelons into boxes. Each box can only hold 20 kilograms without breaking. The farmer needs to know if 10 boxes will be enough for her to carry all 100 watermelons to market.

All of these problems share a common characteristic that is the key to understanding the nature of P versus NP: In order to solve hard (NP) problems, you have to try all combinations.
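To make the contrast concrete, here is a small Python sketch of the farmer's watermelon problem. The melon masses and the tiny box count are invented for illustration; the article's 100-melon, 10-box instance would mean an astronomical number of candidate packings. Notice that checking one proposed packing is a single quick pass, while finding a packing by brute force means walking through every combination.

from itertools import product

BOX_LIMIT = 20.0  # kilograms a single box can hold, as in the example above
NUM_BOXES = 3     # a toy instance; made-up numbers, not from the article
MELONS = [8.0, 7.5, 6.0, 5.5, 9.0, 4.0]  # made-up masses

def is_valid(assignment):
    """Verification, the easy part: one pass to check no box is overloaded."""
    loads = [0.0] * NUM_BOXES
    for mass, box in zip(MELONS, assignment):
        loads[box] += mass
    return all(load <= BOX_LIMIT for load in loads)

def solve_by_brute_force():
    """Solving, the hard part: NUM_BOXES ** len(MELONS) packings to try."""
    for assignment in product(range(NUM_BOXES), repeat=len(MELONS)):
        if is_valid(assignment):
            return assignment
    return None

print(solve_by_brute_force())  # the first packing that fits, or None if none does

Verification scales politely with the number of melons; the brute-force search blows up exponentially, which is exactly the gap the P versus NP question is about.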

So the question everyone's trying to answer is this: Is it possible to find an algorithm for the hard problems (NP problems) that is faster than checking every single possibility?

This is why the P versus NP problem is so interesting to people. If anyone were to solve it, it would potentially make very difficult problems very easy for computers to solve.

P vs. NP deals with the gap between computers being able to quickly solve problems vs. just being able to test proposed solutions for correctness. As such, the P vs. NP problem is the search for a way to solve problems that require the trying of millions, billions, or trillions of combinations without actually having to try each one. Solving this problem would have profound effects on computing, and therefore on our society.

[ 03.02.14: Cleaned up some of the explanation based on the possibility of confusion. ]

There is a class of NP problems that are NP-Complete, which means that if you can solve any one of them quickly, then you can use the same method to solve every other NP problem quickly. This is a highly simplified explanation designed to acquaint people with the concept. For a more complete exploration, check out the Wikipedia article or the numerous resources online.

If you’d like to connect or respond, please reach out via Twitter, using the comments below, or by email. Also consider subscribing to the site via RSS and checking out my other content.

Thank you for visiting.


Why we love Mozilla Persona. And why you should, too.

$
0
0

Comments:"Why we love Mozilla Persona. And why you should, too."

URL:http://blog.zonino.co.uk/why-we-love-mozilla-persona-and-why-you-should-too/


In a nutshell, Persona is great for us because it means that we don't have to store anyone's passwords and nobody has to create a new account to log in to Zonino.

Persona makes signing in really easy for developers and users. You can reuse your Persona account on many websites. And you don't even need to create a new password if you're using gmail or yahoo mail thanks to Identity Bridging. If you use a different email provider that's ok, too; Persona will ask you to create a password that you can reuse on all Persona enabled websites. The nice thing here is that none of those websites need to store that password - because Persona vouches for you when you sign in. Really neat, no more password leaking.
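To make that vouching step concrete, here is a minimal Python sketch of the server-side check a Persona-enabled site performs. It is not Zonino's actual code: the AUDIENCE value is a hypothetical stand-in for the real site's origin, and the verifier URL is the public endpoint Mozilla documented for Persona at the time. The browser hands the server a signed assertion, the server asks the verifier whether it is genuine, and no password is ever seen or stored by the site.

import json
from urllib.parse import urlencode
from urllib.request import urlopen

VERIFIER_URL = "https://verifier.login.persona.org/verify"  # Mozilla's hosted verifier
AUDIENCE = "https://example.com"  # hypothetical; must match the origin the user signed in on

def verify_assertion(assertion):
    """POST the browser-supplied assertion to the verifier and return the
    verified email address, or None if the check fails."""
    payload = urlencode({"assertion": assertion, "audience": AUDIENCE}).encode()
    with urlopen(VERIFIER_URL, data=payload) as response:
        result = json.load(response)
    return result["email"] if result.get("status") == "okay" else None

If the status comes back "okay", the site can trust the email address and start a session for it, which is all a login really needs.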

Now Mozilla is by no means a perfect organisation that builds perfect products. A good example from Aral Balkan at the January Hacker News London Meetup is that the initial date setup for Firefox OS defaults to 1980. That's just silly. But such minor UX indiscretions are just that: minor. That Mozilla is a not-for-profit organisation and improving the web platform is their core mission garners an amount of kudos that the other big players in their space can only dream of. They have a plinth in San Francisco honouring contributors to Firefox. Our buddy Pedro is on there. That's just awesome.

We don't know your password. Google doesn't know you're signing in to Zonino. That's cool.

Importantly, unlike signing in using your Facebook, Google or similar account directly, you don't need to give up any of your private information just to sign in. And even more importantly, if, say, you sign in using Gmail Identity Bridging, Google doesn't get to know which websites you're signing in to - they only see that Persona is checking someone's identity.

We think that Persona is a great attempt at improving usability, security and privacy when it comes to managing logins and passwords on the web and we're excited to try it out here at Zonino.

Read more here

And also here

Now perhaps this post is a little less playful than usual but I make no apologies. We're talking about privacy here - and we take that shit seriously.
