
Fig | Fast, isolated development environments using Docker


URL:http://orchardup.github.io/fig/?


Fast, isolated development environments using Docker.

Define your app's environment with Docker so it can be reproduced anywhere:

FROM orchardup/python:2.7
ADD . /code
RUN pip install -r requirements.txt
WORKDIR /code
CMD python app.py

Define the services that make up your app so they can be run together in an isolated environment:

web:
 build: .
 links:
 - db
 ports:
 - 8000:8000
db:
 image: orchardup/postgresql

(No more installing Postgres on your laptop!)

Then type fig up, and Fig will start and run your entire app.

There are commands to:

  • start, stop and rebuild services
  • view the status of running services
  • tail running services' log output
  • run a one-off command on a service

Fig is a project from Orchard. Follow us on Twitter to keep up to date with Fig and other Docker news.

Quick start

Let's get a basic Python web app running on Fig. This walkthrough assumes a little knowledge of Python, but the concepts should be clear even if you're not familiar with it.

First, install Docker. If you're on OS X, you can use docker-osx:

$ curl https://raw.github.com/noplay/docker-osx/master/docker-osx > /usr/local/bin/docker-osx
$ chmod +x /usr/local/bin/docker-osx
$ docker-osx shell

Docker has guides for Ubuntu and other platforms in their documentation.

Next, install Fig:

$ sudo pip install -U fig

(This command also upgrades Fig when we release a new version. If you don’t have pip installed, try brew install python or apt-get install python-pip.)

You'll want to make a directory for the project:

$ mkdir figtest
$ cd figtest

Inside this directory, create app.py, a simple web app that uses the Flask framework and increments a value in Redis:

from flask import Flask
from redis import Redis
import os

app = Flask(__name__)
redis = Redis(
    host=os.environ.get('FIGTEST_REDIS_1_PORT_6379_TCP_ADDR'),
    port=int(os.environ.get('FIGTEST_REDIS_1_PORT_6379_TCP_PORT'))
)

@app.route('/')
def hello():
    redis.incr('hits')
    return 'Hello World! I have been seen %s times.' % redis.get('hits')

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)

We define our Python dependencies in a file called requirements.txt:
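
Given the imports in app.py above, a minimal requirements.txt would list just these two packages:

flask
redis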

And we define how to build this into a Docker image using a file called Dockerfile:

FROM stackbrew/ubuntu:13.10
RUN apt-get -qq update
RUN apt-get install -y python python-pip
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
EXPOSE 5000
CMD python app.py

That tells Docker to create an image with Python and Flask installed on it, run the command python app.py, and open port 5000 (the port that Flask listens on).

We then define a set of services using fig.yml:

web:
 build: .
 ports:
 - 5000:5000
 volumes:
 - .:/code
 links:
 - redis
redis:
 image: orchardup/redis

This defines two services:

  • web, which is built from Dockerfile in the current directory. It also says to forward the exposed port 5000 on the container to port 5000 on the host machine, connect up the Redis service, and mount the current directory inside the container so we can work on code without having to rebuild the image.
  • redis, which uses the public image orchardup/redis.

Now if we run fig up, it'll pull a Redis image, build an image for our own code, and start everything up:

$ fig up
Pulling image orchardup/redis...
Building web...
Starting figtest_redis_1...
Starting figtest_web_1...
figtest_redis_1 | [8] 02 Jan 18:43:35.576 # Server started, Redis version 2.8.3
figtest_web_1 | * Running on http://0.0.0.0:5000/

Open up http://localhost:5000 in your browser (or http://localdocker:5000 if you're using docker-osx) and you should see it running!

If you want to run your services in the background, you can pass the -d flag to fig up and use fig ps to see what is currently running:

$ fig up -d
Starting figtest_redis_1...
Starting figtest_web_1...
$ fig ps
      Name                Command             State       Ports
-------------------------------------------------------------------
figtest_redis_1   /usr/local/bin/run          Up
figtest_web_1     /bin/sh -c python app.py    Up          5000->5000/tcp

fig run allows you to run one-off commands for your services. For example, to see what environment variables are available to the web service:
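
Presumably something along these lines (assuming the container provides a standard env binary):

$ fig run web env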

See fig --help for the other commands that are available.

If you started Fig with fig up -d, you'll probably want to stop your services once you've finished with them:
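
Something like:

$ fig stop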

That's more-or-less how Fig works. See the reference section below for full details on the commands, configuration file and environment variables. If you have any thoughts or suggestions, open an issue on GitHub or email us.


Egor Homakov: Two "WontFix" vulnerabilities in Facebook Connect


URL:http://homakov.blogspot.com/2014/01/two-severe-wontfix-vulnerabilities-in.html??


TL;DR Every website with "Connect Facebook account and log in with it" functionality is vulnerable to account hijacking. Every website relying on signed_request (for example, the official JS SDK) is vulnerable to account takeover as soon as an attacker finds a 302 redirect to another domain.

I don't think these will be fixed; I've heard from the Facebook team that fixing them would break compatibility. I really wish they would fix them, though, because as you can see below, I feel these are serious issues.

I understand the business reasons why they might choose compatibility, but from my perspective, when you have to choose between security and compatibility, the former is the right bet. Let me quickly describe what these bugs are and how you can protect your websites.

CSRF on the facebook.com login to hijack your identity.
It's a higher-level variant of the most common OAuth vulnerability (the attacker's social account gets attached to the victim's client account), but here even clients using "state" to prevent CSRF are vulnerable.

<iframe name="playground" src='data:text/html,<form id="genform" action="https://www.facebook.com/login.php" method="POST"><input type="hidden" name="email" value="homakov@gmail.com"><input type="hidden" name="pass" value="password"></form><script>genform.submit()</script>'></iframe>

FYI, we need the data: trick to get rid of the Referer header; Facebook rejects requests with cross-domain Referers.

This form logs the victim into an arbitrary account controlled by the attacker (even if the user is already logged in; the logout procedure is trivial). From then on, Facebook will respond to all OAuth flows with the attacker's profile information and the attacker's uid.

Every website with "Connect your Facebook to your main account to log in faster" functionality is vulnerable to account hijacking: as long as the attacker can replace your identity on Facebook with his own, he can connect his Facebook account to the victim's account on the website simply by loading the CLIENT/fb/connect URL.

Once again: even if we cannot inject our own code into the callback because of state protection, we can re-login the user and make Facebook do all the work for us!

Almost all server-side libraries and implementations are "vulnerable" (they are not; it's Facebook who's vulnerable!): omniauth, django-social-auth, etc. And yes, the official facebook-php-sdk too.

(By the way, I found 2 bugs in omniauth-facebook: state fixation, authentication bypass. Update if you haven't yet.)

Mitigation: require a CSRF token for adding a social connection. E.g. instead of /connect/facebook, use /connect/facebook?authenticity_token=123qwe. This makes it impossible for an attacker to start the process by himself.
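
As a rough sketch of that mitigation in Python/Flask (my own illustration; the endpoint and helper names are assumptions, not taken from any particular framework):

import hmac
import os
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = os.urandom(32)

def authenticity_token():
    # Issue a per-session random token and embed it in the "connect" link you render.
    if 'authenticity_token' not in session:
        session['authenticity_token'] = os.urandom(16).hex()
    return session['authenticity_token']

@app.route('/connect/facebook')
def connect_facebook():
    supplied = request.args.get('authenticity_token', '')
    expected = session.get('authenticity_token', '')
    # Without a matching token, an attacker cannot start the connect flow on the victim's behalf.
    if not expected or not hmac.compare_digest(supplied, expected):
        abort(403)
    # ...continue with the normal OAuth connect flow here...
    return 'ok'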

Facebook JS SDK and #signed_request
Since "redirect_uri" is flexible on Connect since its creation, Facebook engineers made it a required parameter to obtain "access_token" for issued "code". If the code was issued for a different (spoofed) redirect_uri, provider will respond with mismatch-error.

signed_request is a special, non-standard transport created by Facebook. It carries a "code" as well, but this code is issued for an empty redirect_uri (""). Furthermore, the signed_request is sent in the #fragment, so it can easily be leaked with any 302 redirect to the attacker's domain.

And guess what: the redirect can even be on a subdomain of our target! The attack surface gets so huge that you can no doubt find a redirecting endpoint on any big website.

Basically, signed_request is exactly what the "code" flow is, but with leak protection turned off.

All you need to do is steal the victim's signed_request with a redirect to your domain (slice it from location.hash), then open the client website, put it in the fbsr_CLIENT_ID cookie, and hit the client's authentication endpoint.

Finally, you're logged in as the owner of that signed_request. It's just like when you steal username+password.

Mitigation: it's hard to get rid of all the redirects. For example, Facebook clients like SoundCloud, Songkick and Foursquare are OAuth providers themselves, so they have to be able to redirect to third-party websites. Each redirect to their "sub" clients is also a threat that could leak Facebook's token. You can try to add #_=_ to "kill" the fragment part.

It's better to stop using signed_request (get rid of the JS SDK) and start using the (slightly more) secure code flow with the protections I mentioned above.

Conclusion
I'd recommend not using Facebook Connect in critical applications (nor any other OAuth provider). Perhaps it's a suitable quick login for a casual social game, but never for a website with important data. Use old-school passwords instead.

If you must use Facebook Connect, I recommend whitelisting your redirect_uri in the app's settings and requiring user interaction (clicking a button) to start adding a new connection. I really hope Facebook will change their mind and remain a trustworthy identity provider.

maintenance - Why do dynamic languages make it more difficult to maintain large codebases? - Programmers Stack Exchange


URL:http://programmers.stackexchange.com/questions/221615/why-do-dynamic-languages-make-it-more-difficult-to-maintain-large-codebases/221658#221658


dynamic languages make for harder to maintain large codebases

Caveat: I have not watched the presentation.

I have been on the design committees for JavaScript (a very dynamic language), C# (a mostly static language) and Visual Basic (which is both static and dynamic), so I have a number of thoughts on this subject; too many to easily fit into an answer here.

Let me begin by saying that it is hard to maintain a large codebase, period. Big code is hard to write no matter what tools you have at your disposal. Your question does not imply that maintaining a large codebase in a statically-typed language is "easy"; rather the question presupposes merely that it is an even harder problem to maintain a large codebase in a dynamic language than in a static language. That said, there are reasons why the effort expended in maintaining a large codebase in a dynamic language is somewhat larger than the effort expended for statically typed languages. I'll explore a few of those in this post.

But we are getting ahead of ourselves. We should clearly define what we mean by a "dynamic" language; by "dynamic" language I mean the opposite of a "static" language.

A "statically-typed" language is a language designed to facilitate automatic correctness checking by a tool that has access to only the source code, not the running state of the program. The facts that are deduced by the tool are called "types". The language designers produce a set of rules about what makes a program "type safe", and the tool seeks to prove that the program follows those rules; if it does not then it produces a type error.

A "dynamically-typed" language by contrast is one not designed to facilitate this kind of checking. The meaning of the data stored in any particular location can only be easily determined by inspection while the program is running.

(We could also make a distinction between dynamically scoped and lexically scoped languages, but let's not go there for the purposes of this discussion. A dynamically typed language need not be dynamically scoped and a statically typed language need not be lexically scoped, but there is often a correlation between the two.)

So now that we have our terms straight let's talk about large codebases. Large codebases tend to have some common characteristics:

  • They are too large for any one person to understand every detail.
  • They are often worked on by large teams whose personnel changes over time.
  • They are often worked on for a long time, with multiple versions.

All these characteristics present impediments to understanding the code, and therefore present impediments to correctly changing the code. In short: time is money; making correct changes to a large codebase is expensive due to the nature of these impediments to understanding.

Since budgets are finite and we want to do as much as we can with the resources we have, the maintainers of large codebases seek to lower the cost of making correct changes by mitigating these impediments. Some of the ways that large teams mitigate these impediments are:

  • modularization: code is factored into "modules" of some sort where each module has a clear responsibility. The action of the code can be documented and understood without a user having to understand its implementation details.
  • encapsulation: modules make a distinction between their "public" surface area and their "private" implementation details so that the latter can be improved without affecting the correctness of the program as a whole.
  • re-use: when a problem is solved correctly once, it is solved for all time; the solution can be re-used in the creation of new solutions. Techniques such as making a library of utility functions, or making functionality in a base class that can be extended by a derived class, or architectures that encourage composition, are all techniques for code re-use. Again, the point is to lower costs.
  • annotation: code is annotated to describe the valid values that might go into a variable, for instance.
  • automatic detection of errors: a team working on a large program is wise to build a device which determines early when a programming error has been made and tells you about it so that it can be fixed quickly, before the error is compounded with more errors. Techniques such as writing a test suite, or running a static analyzer fall into this category.

A statically typed language is an example of the latter; you get in the compiler itself a device which looks for type errors and informs you of them before you check the broken code change into the repository. A manifestly typed language requires that storage locations be annotated with facts about what can go into them.

So for that reason alone, dynamically typed languages make it harder to maintain a large codebase, because the work that is done by the compiler "for free" is now work that you must do in the form of writing test suites. If you want to annotate the meaning of your variables, you must come up with a system for doing so, and if a new team member accidentally violates it, that must be caught in code review, not by the compiler.
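
To make that concrete with a small hypothetical sketch (my own, not from the answer): in Python, a misspelled attribute is not an error when the module loads; someone has to write and run a test that exercises the offending line.

# Hypothetical sketch: nothing flags this module when it is imported.
class Account:
    def __init__(self, balance):
        self.balance = balance

def apply_fee(account, fee):
    # Misspelled attribute: 'balence' instead of 'balance'. Python silently
    # creates a new attribute; a static type checker would typically reject this.
    account.balence = account.balance - fee

def test_apply_fee():
    acct = Account(100)
    apply_fee(acct, 5)
    assert acct.balance == 95  # fails only at test time, exposing the misspelling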

Now here is the key point I have been building up to: there is a strong correlation between a language being dynamically typed and a language also lacking all the other facilities that make lowering the cost of maintaining a large codebase easier, and that is the key reason why it is more difficult to maintain a large codebase in a dynamic language. And similarly there is a correlation between a language being statically typed and having facilities that make programming in the larger easier.

Let's take JavaScript for example. (I worked on the original versions of JScript at Microsoft from 1996 through 2001.) The by-design purpose of JavaScript was to make the monkey dance when you moused over it. Scripts were often a single line. We considered ten line scripts to be pretty normal, hundred line scripts to be huge, and thousand line scripts were unheard of. The language was absolutely not designed for programming in the large, and our implementation decisions, performance targets, and so on, were based on that assumption.

Since JavaScript was specifically designed for programs where one person could see the whole thing on a single page, JavaScript is not only dynamically typed, but it also lacks a great many other facilities that are commonly used when programming in the large:

  • There is no modularization system; there are no classes, interfaces, or even namespaces. These elements are in other languages to help organize large codebases.
  • The inheritance system -- prototype inheritance -- is both weak and poorly understood. It is by no means obvious how to correctly build prototypes for deep hierarchies (a captain is a kind of pirate, a pirate is a kind of person, a person is a kind of thing...) in out-of-the-box JavaScript.
  • There is no encapsulation whatsoever; every property of every object is yielded up to the for-in construct, and is modifiable at will by any part of the program.
  • There is no way to annotate any restriction on storage; any variable may hold any value.

But it's not just the lack of facilities that make programming in the large easier. There are also features that make it harder.

  • JavaScript's error management system is designed with the assumption that the script is running on a web page, that failure is likely, that the cost of failure is low, and that the user who sees the failure is the person least able to fix it: the browser user, not the code's author. Therefore as many errors as possible fail silently and the program keeps trying to muddle on through. This is a reasonable characteristic given the goals of the language, but it surely makes programming in the larger harder because it increases the difficulty of writing test cases. If nothing ever fails it is harder to write tests that detect failure!

  • Code can modify itself based on user input via facilities such as eval or adding new script blocks to the browser DOM dynamically. Any static analysis tool might not even know what code makes up the program!

  • And so on.

Clearly it is possible to overcome these impediments and build a large program in JavaScript; many multiple-million-line JavaScript programs now exist. But the large teams who build those programs use tools and have discipline to overcome the impediments that JavaScript throws in your way:

  • They write test cases for every identifier ever used in the program. In a world where misspellings are silently ignored, this is necessary. This is a cost.
  • They write code in type-checked languages and compile that to JavaScript, such as TypeScript.
  • They use frameworks that encourage programming in a style more amenable to analysis, more amenable to modularization, and less likely to produce common errors.
  • They have good discipline about naming conventions, about division of responsibilities, about what the public surface of a given object is, and so on. Again, this is a cost; those tasks would be performed by a compiler in a typical statically-typed language.

In conclusion, it is not merely the dynamic nature of typing that increases the cost of maintaining a large codebase. That alone does increase costs, but that is far from the whole story. I could design you a language that was dynamically typed but also had namespaces, modules, inheritance, libraries, private members, and so on -- in fact, C# 4 is such a language -- and such a language would be both dynamic and highly suited for programming in the large.

Rather it is also everything else that is frequently missing from a dynamic language that increases costs in a large codebase. Dynamic languages which also include facilities for good testing, for modularization, reuse, encapsulation, and so on, can indeed decrease costs when programming in the large, but many frequently-used dynamic languages do not have these facilities built in. Someone has to build them, and that adds cost.

NSA and GCHQ target 'leaky' phone apps like Angry Birds to scoop user data | World news | theguardian.com


URL:http://www.theguardian.com/world/2014/jan/27/nsa-gchq-smartphone-app-angry-birds-personal-data


The National Security Agency and its UK counterpart GCHQ have been developing capabilities to take advantage of "leaky" smartphone apps, such as the wildly popular Angry Birds game, that transmit users' private information across the internet, according to top secret documents.

The data pouring onto communication networks from the new generation of iPhone and Android apps ranges from phone model and screen size to personal details such as age, gender and location. Some apps, the documents state, can share users' most sensitive information such as sexual orientation – and one app recorded in the material even sends specific sexual preferences such as whether or not the user may be a swinger.

Many smartphone owners will be unaware of the full extent to which this information is being shared across the internet, and even the most sophisticated would be unlikely to realise that all of it is available for the spy agencies to collect.

Dozens of classified documents, provided to the Guardian by whistleblower Edward Snowden and reported in partnership with the New York Times and ProPublica, detail the NSA and GCHQ efforts to piggyback on this commercial data collection for their own purposes.

Scooping up information the apps are sending about their users allows the agencies to collect large quantities of mobile phone data from their existing mass surveillance tools – such as cable taps, or from international mobile networks – rather than solely from hacking into individual mobile handsets.

Exploiting phone information and location is a high-priority effort for the intelligence agencies, as terrorists and other intelligence targets make substantial use of phones in planning and carrying out their activities, for example by using phones as triggering devices in conflict zones. The NSA has cumulatively spent more than $1bn in its phone targeting efforts.

The disclosures also reveal how much the shift towards smartphone browsing could benefit spy agencies' collection efforts.

A May 2010 NSA slide on the agency's 'perfect scenario' for obtaining data from mobile apps. Photograph: Guardian

One slide from a May 2010 NSA presentation on getting data from smartphones – breathlessly titled "Golden Nugget!" – sets out the agency's "perfect scenario": "Target uploading photo to a social media site taken with a mobile device. What can we get?"

The question is answered in the notes to the slide: from that event alone, the agency said it could obtain a "possible image", email selector, phone, buddy lists, and "a host of other social working data as well as location".

In practice, most major social media sites, such as Facebook and Twitter, strip photos of identifying location metadata (known as EXIF data) before publication. However, depending on when this is done during upload, such data may still, briefly, be available for collection by the agencies as it travels across the networks.

Depending on what profile information a user had supplied, the documents suggested, the agency would be able to collect almost every key detail of a user's life: including home country, current location (through geolocation), age, gender, zip code, marital status – options included "single", "married", "divorced", "swinger" and more – income, ethnicity, sexual orientation, education level, and number of children.

The agencies also made use of their mobile interception capabilities to collect location information in bulk, from Google and other mapping apps. One basic effort by GCHQ and the NSA was to build a database geolocating every mobile phone mast in the world – meaning that just by taking tower ID from a handset, location information could be gleaned.

A more sophisticated effort, though, relied on intercepting Google Maps queries made on smartphones, and using them to collect large volumes of location information.

So successful was this effort that one 2008 document noted that "[i]t effectively means that anyone using Google Maps on a smartphone is working in support of a GCHQ system."

The information generated by each app is chosen by its developers, or by the company that delivers an app's adverts. The documents do not detail whether the agencies actually collect the potentially sensitive details some apps are capable of storing or transmitting, but any such information would likely qualify as content, rather than metadata.

Data collected from smartphone apps is subject to the same laws and minimisation procedures as all other NSA activity – procedures that the US president, Barack Obama, suggested may be subject to reform in a speech 10 days ago. But the president focused largely on the NSA's collection of the metadata from US phone calls and made no mention in his address of the large amounts of data the agency collects from smartphone apps.

The latest disclosures could also add to mounting public concern about how the technology sector collects and uses information, especially for those outside the US, who enjoy fewer privacy protections than Americans. A January poll for the Washington Post showed 69% of US adults were already concerned about how tech companies such as Google used and stored their information.

The documents do not make it clear how much of the information that can be taken from apps is routinely collected, stored or searched, nor how many users may be affected. The NSA says it does not target Americans and its capabilities are deployed only against "valid foreign intelligence targets".

The documents do set out in great detail exactly how much information can be collected from widely popular apps. One document held on GCHQ's internal Wikipedia-style guide for staff details what can be collected from different apps. Though it uses Android apps for most of its examples, it suggests much of the same data could be taken from equivalent apps on iPhone or other platforms.

The GCHQ documents set out examples of what information can be extracted from different ad platforms, using perhaps the most popular mobile phone game of all time, Angry Birds – which has reportedly been downloaded more than 1.7bn times – as a case study.

From some app platforms, relatively limited, but identifying, information such as exact handset model, the unique ID of the handset, software version, and similar details are all that are transmitted.

Other apps choose to transmit much more data, meaning the agency could potentially net far more. One mobile ad platform, Millennial Media, appeared to offer particularly rich information. Millennial Media's website states it has partnered with Rovio on a special edition of Angry Birds; with Farmville maker Zynga; with Call of Duty developer Activision, and many other major franchises.

Rovio, the maker of Angry Birds, said it had no knowledge of any NSA or GCHQ programs looking to extract data from its apps users.

"Rovio doesn't have any previous knowledge of this matter, and have not been aware of such activity in 3rd party advertising networks," said Saara Bergström, Rovio's VP of marketing and communications. "Nor do we have any involvement with the organizations you mentioned [NSA and GCHQ]."

Millennial Media did not respond to a request for comment.

In December, the Washington Post reported on how the NSA could make use of advertising tracking files generated through normal internet browsing – known as cookies – from Google and others to get information on potential targets.

However, the richer personal data available to many apps, coupled with real-time geolocation, and the uniquely identifying handset information many apps transmit give the agencies a far richer data source than conventional web-tracking cookies.

Almost every major website uses cookies to serve targeted advertising and content, as well as streamline the experience for the user, for example by managing logins. One GCHQ document from 2010 notes that cookie data – which generally qualifies as metadata – has become just as important to the spies. In fact, the agencies were sweeping it up in such high volumes that they were struggling to store it.

"They are gathered in bulk, and are currently our single largest type of events," the document stated.

The ability to obtain targeted intelligence by hacking individual handsets has been well documented, both through several years of hacker conferences and previous NSA disclosures in Der Spiegel, and both the NSA and GCHQ have extensive tools ready to deploy against iPhone, Android and other phone platforms.

GCHQ's targeted tools against individual smartphones are named after characters in the TV series The Smurfs. An ability to make the phone's microphone 'hot', to listen in to conversations, is named "Nosey Smurf". High-precision geolocation is called "Tracker Smurf", power management – an ability to stealthily activate a phone that is apparently turned off – is "Dreamy Smurf", while the spyware's self-hiding capabilities are codenamed "Paranoid Smurf".

Those capability names are set out in a much broader 2010 presentation that sheds light on the spy agencies' aspirations for mobile phone interception, and on their less-documented mass-collection abilities.

The cover sheet of the document sets out the team's aspirations:

The cover slide for a May 2010 GCHQ presentation on mobile phone data interception. Photograph: Guardian

Another slide details weak spots where data flows from mobile phone network providers to the wider internet, where the agency attempts to intercept communications. These are locations either within a particular network, or at international roaming exchanges (known as GRXs), where data from travellers roaming outside their home country is routed.

While GCHQ uses Android apps for most of its examples, it suggests much of the same data could be taken from iPhone apps. Photograph: Guardian

GCHQ's targeted tools against individual smartphones are named after characters in the TV series The Smurfs. Photograph: Guardian

These are particularly useful to the agency as data is often only weakly encrypted on such networks, and includes extra information such as handset ID or mobile number – much stronger target identifiers than usual IP addresses or similar information left behind when PCs and laptops browse the internet.

The NSA said its phone interception techniques are only used against valid targets, and are subject to stringent legal safeguards.

"The communications of people who are not valid foreign intelligence targets are not of interest to the National Security Agency," said a spokeswoman in a statement.

"Any implication that NSA's foreign intelligence collection is focused on the smartphone or social media communications of everyday Americans is not true. Moreover, NSA does not profile everyday Americans as it carries out its foreign intelligence mission. We collect only those communications that we are authorized by law to collect for valid foreign intelligence and counterintelligence purposes – regardless of the technical means used by the targets.

"Because some data of US persons may at times be incidentally collected in NSA's lawful foreign intelligence mission, privacy protections for US persons exist across the entire process concerning the use, handling, retention, and dissemination of data. In addition, NSA actively works to remove extraneous data, to include that of innocent foreign citizens, as early as possible in the process.

"Continuous and selective publication of specific techniques and tools lawfully used by NSA to pursue legitimate foreign intelligence targets is detrimental to the security of the United States and our allies – and places at risk those we are sworn to protect."

The NSA declined to respond to a series of queries on how routinely capabilities against apps were deployed, or on the specific minimisation procedures used to prevent US citizens' information being stored through such measures.

GCHQ declined to comment on any of its specific programs, but stressed all of its activities were proportional and complied with UK law.

"It is a longstanding policy that we do not comment on intelligence matters," said a spokesman.

"Furthermore, all of GCHQ's work is carried out in accordance with a strict legal and policy framework that ensures that our activities are authorised, necessary and proportionate, and that there is rigorous oversight, including from the Secretary of State, the Interception and Intelligence Services Commissioners and the Parliamentary Intelligence and Security Committee. All our operational processes rigorously support this position."

• A separate disclosure on Wednesday, published by Glenn Greenwald and NBC News, gave examples of how GCHQ was making use of its cable-tapping capabilities to monitor YouTube and social media traffic in real-time.

GCHQ’s cable-tapping and internet buffering capabilities, codenamed Tempora, were disclosed by the Guardian in June, but the new documents published by NBC from a GCHQ presentation titled “Psychology: A New Kind of SIGDEV" set out a program codenamed Squeaky Dolphin which gave the British spies “broad real-time monitoring” of “YouTube Video Views”, “URLs ‘Liked’ on Facebook” and “Blogspot/Blogger Visits”.

A further slide noted that “passive” – a term for large-scale surveillance through cable intercepts – gives the agency “scalability”.

The means of interception mean GCHQ and NSA could obtain data without any knowledge or co-operation from the technology companies. Spokespeople for the NSA and GCHQ told NBC all programs were carried out in accordance with US and UK law.

Tesla Completes L.A.-to-New York Electric Model S Drive Chargers - Bloomberg


URL:http://www.bloomberg.com/news/2014-01-26/tesla-completes-l-a-to-new-york-electric-model-s-drive-chargers.html


Tesla Motors Inc. (TSLA)’s Elon Musk said the electric-car maker has expanded its U.S. network of rapid chargers to let owners of battery-powered Model S sedans drive their cars from coast to coast for the first time.

Musk, Tesla’s chief executive officer and co-founder, said last year the company would set up “Superchargers” in most major U.S. and Canadian cities to permit long-distance trips solely on electricity provided at no charge. The carmaker has more than 70 stations in North America, according to Tesla’s website.

“Tesla Supercharger network now energized from New York to LA, both coast + Texas!” Musk said in a Twitter post yesterday. “Approx 80% of US population covered.”

Related: Musk’s Fire Numbers Are a Stretch, But Teslas Are Safe

Tesla, seeking to be the world’s leading maker of all-electric autos, needs the broader network of charging stations to address the limited driving range and long charge times of battery cars. Without the stations, Tesla drivers are limited by the estimated 265-mile (426-kilometer) range of a Model S battery, which can take as long as 9 hours to repower.

Musk has said the chargers, which the company says are the fastest available, are installed near major highway interchanges on properties close to restaurants, cafés or shopping to allow drivers to take breaks while their vehicles are repowered.

The Superchargers, currently compatible only with the Model S, provide 170 miles of range in a 30-minute charge, according to the company. The cheapest version of the Fremont, California-built car enabled to work with the Superchargers costs $73,070, according to Tesla’s website.

Two teams of Tesla drivers will try to set U.S. cross-country electric vehicle speed records using the chargers, departing Jan. 31 from Los Angeles and arriving in New York Feb. 2, said Musk, 42. He also plans a “LA-NY family road trip over Spring Break,” using the system, he said on Twitter.

The company named for inventor Nikola Tesla more than quadrupled in value in 2013. Tesla fell 3.8 percent to $174.60 at the close Jan. 24 in New York.

To contact the reporter on this story: Alan Ohnsman in Los Angeles at aohnsman@bloomberg.net

To contact the editor responsible for this story: Jamie Butters at jbutters@bloomberg.net

fogus: Timothy Hart, Rest in Peace.


URL:http://blog.fogus.me/2014/01/27/timothy-hart-rest-in-peace/


Jan 27, 2014

On January 20, 2014 the world lost a hacker named Timothy Hart. Hart was not just any hacker — he was a hacker of the highest order. If you’ve read more than three books on Lisp then you might have seen his name pop up here and there. If not, then you’ve definitely felt his influence.

Macros

Macros are a ubiquitous element for any Lisp language in use today, but the fact is that they were not part of the early LISP implementations. Instead, LISP macros were invented years after Russell’s first implementation by Hart and were described in a short MIT memo entitled MACRO Definitions for LISP. This short but epic document described an extension to the original seven-function LISP eval that looked effectively like the following:

((eq (caar expr) (quote macro))
 (cond
  ((eq (cadar expr) (quote lambda))
   (eval (eval (car (cdddar expr))
               (cons (list (car (caddar expr))
                           (cadr expr))
                     binds))
         binds))))

After reading this paper I took some time to extend my tiny Lisp interpreter Lithp to support Hart-style macros. Please take a moment to read the original paper; you won’t be disappointed. It’s striking how a feature so powerful and eventually far-reaching was described via a humble, 3-page memo.1

Alpha-Beta Search

Sticking with short yet deeply influential memos, Hart and D.J. Edwards described a modification to the known minimax algorithm that would change game search forever. The paper, The Alpha-Beta Heuristic, is 5 pages of astonishment and still stands as one of the best descriptions of the technique. An interesting fact about alpha-beta pruning is that it was independently invented by numerous people and organizations. So go great ideas.
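
(As an aside, here is a minimal sketch of the idea in Python; it's my own illustration, not code from the Hart/Edwards memo.)

def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """children(state) yields successor states; evaluate(state) returns a score."""
    succ = list(children(state))
    if depth == 0 or not succ:
        return evaluate(state)
    if maximizing:
        value = float('-inf')
        for child in succ:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:  # beta cutoff: the minimizing player will never allow this line
                break
        return value
    else:
        value = float('inf')
        for child in succ:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True, children, evaluate))
            beta = min(beta, value)
            if beta <= alpha:  # alpha cutoff: the maximizing player already has a better option
                break
        return value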

LISP 1.5

While a comprehensive write-up of Hart's work would make for a much longer post, I wanted to highlight three contributions that personally influenced me in my programming life. While both the macro and alpha-beta papers were important to me, neither influenced my thinking more deeply than the LISP 1.5 Programmer's Manual. Hart is listed as one of the co-authors of this amazing work, and while I'm not certain of his precise contributions to the manual, his association with the book and his contributions to this early LISP implementation put him amongst the most influential Lisp hackers in the history of the young computing industry. The LISP 1.5 Programmer's Manual is near the top of any list of books that every programmer should read.

Timothy Hart. Thank you and rest in peace.

:F

BBC News - US makes Bitcoin exchange arrests after Silk Road closure


URL:http://www.bbc.co.uk/news/technology-25919482


27 January 2014, last updated at 14:28 ET. By Dave Lee, Technology reporter, BBC News

The operators of two exchanges for the virtual currency Bitcoin have been arrested in the US.

The Department of Justice said Robert Faiella, known as BTCKing, and Charlie Shrem from BitInstant.com have both been charged with money laundering.

The authorities said the pair were engaged in a scheme to sell more than $1m (£603,000) in bitcoins to users of online drug marketplace the Silk Road.

The site was shut down last year and its alleged owner was arrested.

Mr Shrem, 24, was arrested on Sunday at New York's JFK airport. He was expected to appear in court on Monday, prosecutors said.

Mr Faiella, 52, was arrested on Monday at his home in Cape Coral, Florida.


HOW BITCOINS WORK

Bitcoin is often referred to as a new kind of currency.

But it may be better to think of its units as being virtual tokens that have value because enough people believe they do and there is a finite number of them.

Each bitcoin is represented by a unique online registration number.

These numbers are created through a process called "mining", which involves a computer solving a difficult mathematical problem with a 64-digit solution.

Each time a problem is solved the computer's owner is rewarded with bitcoins.
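
(As a rough illustration of that kind of puzzle, here is a toy sketch in Python; it is not Bitcoin's actual mining protocol, which hashes block headers against a network-set target.)

import hashlib

def mine(data, difficulty):
    # Search for a nonce whose SHA-256 digest (a 64-hex-digit number) starts
    # with `difficulty` zero digits - a stand-in for "below the target".
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

print(mine("example block", 4))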

To receive a bitcoin, a user must also have a Bitcoin address - a randomly generated string of 27 to 34 letters and numbers - which acts as a kind of virtual postbox to and from which the bitcoins are sent.

Since there is no registry of these addresses, people can use them to protect their anonymity when making a transaction.

These addresses are in turn stored in Bitcoin wallets, which are used to manage savings. They operate like privately run bank accounts - with the proviso that if the data is lost, so are the bitcoins contained.

Bitcoin exchanges are services that allow users to trade bitcoins for traditional currencies.

Mr Shrem is accused of allowing Mr Faiella to use BitInstant to purchase large quantities of bitcoins to sell on to Silk Road users who wanted to anonymously buy drugs.

The authorities said Mr Shrem was aware that the bitcoins were being used for such purchases, and therefore he was in violation of the Bank Secrecy Act.

The Act requires financial institutions in the US to alert authorities to any suspicious activity that may suggest money laundering is taking place.

Emily Spaven, managing editor of news site Coindesk, told the BBC: "Since the closure of Silk Road and arrest of alleged owner Ross Ulbricht, we always knew more arrests would follow.

"It is unfortunate Silk Road continues to make the headlines in association with Bitcoin - this is the dark side of Bitcoin, which the vast majority of digital currency users have no association with."

'Feigning ignorance'

Following the arrests, James Hunt, from the US Drug Enforcement Agency, said in a statement: "Hiding behind their computers, both defendants are charged with knowingly contributing to and facilitating anonymous drug sales, earning substantial profits along the way.

"Drug law enforcement's job is to investigate and identify those who abet the illicit drug trade at all levels of production and distribution, including those lining their own pockets by feigning ignorance of any wrong doing and turning a blind eye."

Mr Shrem is a founding member and the current vice chairman of the Bitcoin Foundation, a trade group set up to promote Bitcoin as an alternative currency.

"We are surprised and shocked by the news today," said a spokesman for the organisation.

"As a foundation, we take these allegations seriously and do not condone illegal activity."


The BBC's Rory Cellan Jones explains how Bitcoin works

BitInstant was one of the largest Bitcoin exchanges on the internet.

However, the service has been inaccessible for some time, explained Mike Hearn, another board member at the Bitcoin Foundation.

"Charlie's impact on the Bitcoin community has been hovering near zero for a long time now," Mr Hearn told the BBC via email.

"If the allegations are true, it's part of a phase of Bitcoin's life that the project is rapidly leaving behind (and good riddance)."

'Deeply concerned'

BitInstant's investors include Tyler and Cameron Winklevoss - the twins who previously sued Mark Zuckerberg claiming he had stolen their idea for Facebook.

In a statement issued to the BBC, the twins said: "When we invested in BitInstant in the fall of 2012, its management made a commitment to us that they would abide by all applicable laws - including money laundering laws - and we expected nothing less.

"We are obviously deeply concerned about [Mr Shrem's] arrest. We were passive investors in BitInstant and will do everything we can to help law enforcement officials.

"We fully support any and all governmental efforts to ensure that money laundering requirements are enforced, and look forward to clearer regulation being implemented on the purchase and sale of bitcoins."

Follow Dave Lee on Twitter @DaveLeeBBC


In the world of role-playing war games, Volko Ruhnke has become a hero - The Washington Post


URL:http://www.washingtonpost.com/lifestyle/magazine/in-the-world-of-role-playing-war-games-volko-ruhnke-has-become-a-hero/2014/01/10/a56ac8d6-48be-11e3-bf0c-cebf37c6f484_story.html


In the world of war games, Volko Ruhnke has become a hero

War game designer Volko Ruhnke plays the board game Angola at the World Boardgaming Championships in Lancaster, Pa., in August. (Al Tielemans/For The Washington Post)

Jason Albert

At the last stop on a wooded cul-de-sac 15 miles outside of Washington, four middle-aged men huddle around a table to decide the fate of Afghanistan. A map is spread before them. Colored wooden cubes and discs denote military installations, troops and insurgents. A subtle movement — pieces slid from Nuristan province to Kabul — is met with tensed shoulders and exhaled expletives. In the north, the Warlords prep an opium harvest while threatening a terrorist attack. Elsewhere, everywhere, the Taliban is filthy with car bombs, roadside IEDs and suicide bombers.

The fifth man in the room, a CIA national security analyst named Volko Ruhnke, called us here. The palpable discomfort among us brings him joy. It means he has done his job.

When not ensconced at HQ in Langley, Ruhnke, 51, designs commercial wargames. He has invited us to his home in Vienna to playtest his most recent, A Distant Plain. Along with Cuba Libre, they’ll be his fourth and fifth published board games, and the latest in his series simulating insurgencies throughout history: Colombia, Afghanistan and Cuba, with Vietnam, Ireland and the Philippines to follow.

“Jason, you’re letting our country go to s---,” comes a voice from my left, a Virginia drawl. One of my rivals, a 20-year Marine, now retired, and veteran of three Afghanistan deployments, is manning the Afghan government. He’s peeved because I, as the Coalition, am unconcerned with the Warlord threat. He’s right: I don’t care. I find his policies equally irksome, as he spent all of our shared aid securing popular support — support I know he’ll soon undo and dole out as political patronage.

In the game as in real life, the Afghan government and Coalition are ostensibly allies. And in the game as in real life, “ally” has loose meaning. I ignore him and drag his troops to the south to make a move against the Taliban. He scowls as Ruhnke beams. Ruhnke wants us to undermine each other. This is how he designed the game. This is how we learn.

Ruhnke is obviously enjoying his roles as party host, rules expert and teacher. He leans in. He speaks only when needed or pressed, and his explanations arrive with cheerful excitement but also a hint of gravitas, like that of a father patiently conveying hard facts to his children.

“So you’re telling me terror is always effective for the Taliban?” the Afghanistan vet squawks. “It shouldn’t always work; it has to have the possibility to backfire.”

Ruhnke answers without hesitation: “It always works. But remember, I’m not going for a high-fidelity model of district-level counterinsurgency operations.” He adjusts his frameless glasses. “That particular instance of terror is what happened in Nuristan over months of time ... I’m most concerned about delivering the inter-factional pressures and politics.”

Later, when I need the long-gone aid money, I fire back at the vet. “What are you doing over there, Karzai? Remember, this is our cash. Share.”

Ruhnke jumps in again, building bridges. “Tell Jason it’s not corruption. It’s just your traditional way of running things. You have to live here; he’ll eventually leave.” The Taliban leader across the table makes no attempt to stifle a giggle as he reaches for pretzels.

Ruhnke thinks of a day, however remote, when his games might sit on store shelves next to the classics. He sees people just like us having epiphanies through gamed agitation, quick bonds such as ours forged within an intense, inhabitable narrative. But mostly his goal is as unique as it is stark: to educate by providing tabletop insurgencies for any board gamer who would like one.

A game card from A Distant Plain, an Afghanistan-themed wargame created by Ruhnke. (Joshua Yospyn/For The Washington Post)

Although wargames always have been a niche within the board gaming market, there was a time when they held a level of pop culture legitimacy. According to James Dunnigan in “Wargames Handbook” — Dunnigan being one of wargaming’s founding fathers at the now-defunct Simulations Publications Inc. — more than 2 million were sold in 1980.

Wargamers have nearly as many definitions for what qualifies as a wargame as there are conflicts throughout history to simulate. Some say highly abstracted games such as chess fit; others include big sellers such as Axis & Allies and Risk. But, on the whole, hobby wargames are the literary fiction of the gaming universe: dense and respected but often existing in the margins.

Between the release of what most consider the first commercial wargame in 1954, Tactics, and the hobby's high-water mark in the early ’80s, wargames became increasingly complex, often packaged with byzantine rule books and playtimes measured in days, not hours. Fewer than 20 years after the peak, both of the largest wargame publishers had ceased to exist, due to factors including business missteps and the rise of electronic and computer gaming. But there's been a renaissance, in large part because of the Internet's ability to facilitate global democratic conversations among like-minded wargame zealots.

Beginning as a 10-year-old more than three decades ago, I spent innumerable hours hunched over wargames playing commander. My parents didn’t understand. My friends who saw the sun regularly didn’t either. But I wasn’t alone. Even though I left wargaming behind, I never forgot the games’ ability to evoke a sense of time, place and history. After stumbling on them again a few years ago, I found a rabid subculture both familiar and unknown. There were hundreds of new games holding echoes of the era I’d cut my teeth on, but with new mechanics and streamlined gameplay that created stories more nuanced than I’d ever experienced.

Ruhnke’s games stood out. He was tackling recent and still-raging conflicts, such as the amorphous war against terrorism in his game Labyrinth. It seemed as if he was attempting to span the divide between the kitchen-table gamer and the grizzled hard cases from wargaming’s first golden age. I contacted him and asked if this was true.

“It may be a big ambition,” he said, “but I most definitely want to interest board gamers from all traditions in meaningful topics like insurgency. Fun and accessibility are a big part of getting them there.”

Then he invited me to Virginia to play some games.

Ruhnke, a national security analyst with the CIA, holds his fully functioning reproduction of a 1762 Brown Bess musket, with its bayonet removed. (Joshua Yospyn/For The Washington Post)

The first thing you notice when you walk into Ruhnke’s design-studio-cum-game-room is his reproduction of a Brown Bess musket. Hanging high on the wall, it visually dominates the period maps, shadowbox of hand-collected Revolutionary and Civil War bullets, prototypes, books and war-themed gewgaws on every flat surface.

While Ruhnke is reluctant to divulge details regarding his CIA work, he’s transparent on how his professional life has dovetailed with his decades-long wargaming passion. Ruhnke’s path was set in sixth grade with his first wargames, PanzerBlitz and France, 1940. Then, the games for him were all problem solving and decision points. The interest in the why and how didn’t come until the early ’80s, when he was an undergrad at the College of William and Mary. “The colonial remnants got into my blood, and the French and Indian War captivated me,” he said. “It was the 18th-century warfare with powdered wigs, supply lines and siegeworks taking place in the wilderness.” Like any proper autodidact, he spent years reading, walking the battlefields and immersing himself in the minutiae of this oft-overlooked war.

His push into a career in intelligence came in graduate school at Georgetown but once again pointed back to the strategic riddles he encountered playing wargames. Many of his professors were in- and out-of-power politicos. He realized a person could be smart and still end up on the wrong side of an issue. “I didn’t want to be in that position,” he said. “I wanted to improve policy by elevating the debate. Providing information and analysis gives the best chance of pursuing the right strategy.”

Another card from A Distant Plain. (Joshua Yospyn/For The Washington Post)

In the ’90s, while at the CIA, Ruhnke designed a role-play session for his work friends. The Seven Years’ War role-play morphed into a board game. Then he submitted his design to GMT Games, the modern hobby’s highest-profile wargame publisher. Wilderness War was released in 2001 and is now one of GMT’s all-time bestsellers. This led Gene Billingsley, a GMT principal, to approach Ruhnke in 2009 with a commission to create a game about the war against terrorism. About a year later Labyrinth became another bestseller and industry award-winner. Now it was Ruhnke’s turn. He told Billingsley he had an idea for a game series on insurgency. The first would be set in 1990s Colombia.

“I loved the idea but hated the topic,” Billingsley said. “I told him I couldn’t sell it.” Then Billingsley played Ruhnke’s prototype.

Listing for $75 retail, Andean Abyss was another hit in 2012, and the COIN (counterinsurgency) Series was launched. Soon after, designers began approaching him asking to use his core ideas, while, at the same time, Ruhnke was reaching out to the industry’s most respected topic experts.

One of those collaborations is a Vietnam War-themed game called Fire in the Lake — the title giving a nod to Frances FitzGerald’s Pulitzer-winning book. Both posit an insurgency wrapped in a conventional war. GMT has a preorder system wherein a threshold must be reached before a game is sent to final production. It’s not unusual for that to take months or years as orders trickle in. Fire in the Lake hit its number within four days.

Some of the war games Ruhnke has created. (Joshua Yospyn/For The Washington Post)

While different in feel and detail, all of Ruhnke’s COIN Series games use the same simple mechanism: a deck of brightly colored cards featuring actual or generalized historical events. An example from A Distant Plain would be “U.S.-Pakistan Talks.” Cards are flipped two at a time. One card is live; the second allows players to see what’s coming. Two factions are allowed to act on the live card; then the other two factions, on the next.

The first-choice player opts to trigger the event and listed outcomes. Each event has two possible paths: one interpretation benefiting the insurgents; one, the counterinsurgents. In the case of “U.S.-Pakistan Talks,” the card could worsen the relationship between the United States and Pakistan, making it easier for the Taliban to operate. Or if a counterinsurgent faction could pick the event, it may choose less antagonistic effects. But it’s not an either-or decision. A faction could bypass the event as if it never happened and, instead, select from a list of faction-specific operations. The Coalition can train troops, patrol Afghanistan’s ring road, sweep into provinces to locate insurgents, or assault. The Taliban’s options include rallying to recruit guerrillas, marching, attacking or executing terrorism. The unique history of each conflict is then baked in, but it never arrives in the same sequence from game to game, if at all.

To win A Distant Plain, the Afghan government has to control as much of the population as possible. The Coalition wants support for the current regime and as many of their pieces off the board, out of harm’s way. The Taliban work to intimidate the population into opposing the government, and the Warlords care little about support or opposition, only that no one is in control so they can traffic drugs with impunity.

Characteristic details aside, all of the COIN Series games are exercises in restraint, tenuous diplomacy and management of chaos.

A view of A Distant Plain. (Al Tielemans/For The Washington Post)

We’re hours into our war and no longer strangers. Jeff Gringer, known to us as the Taliban, stands and thrusts a hooked finger in my direction while declaring he’s going to “pop those Coalition troops in Helmand.” The Taliban is using a car bomb to ambush my men. I rock back in my chair, resigned to my fate.

Robert Leonhard, pulling the strings for the Warlords, is in real life a national security analyst at the Johns Hopkins Applied Physics Laboratory and a retired Army lieutenant colonel. He has been sitting in mostly quiet concentration but finally speaks up: “I hate to hear you say that, Jeff. My oldest is in Helmand province.” He pauses, moving his glasses from his nose to the top of his graying high-and-tight. “I think. He can’t tell me exactly where he is.”

A thick silence covers the room. My thoughts move to a country half a world away as I consider my deficiencies in truly understanding a war I’ve been reading about for 10 years. On the board in front of us, my troops don’t make it back. I move them to a brown box marked “Casualties.” They’re just wooden cubes. Still, I find it hard to make eye contact with Leonhard.

But a half-hour later Leonhard is asking to play again. The games provide a first-person opportunity to rewrite history where books, movies and video games fall short. Leonhard wants to play again because there’s much more to learn and understand.

Soon after, I execute a quick and dirty surge to bring my pieces home, in this case mirroring future history. I win the game while leaving our Afghanistan on the cusp of anarchy. The victory leads to sincere handshakes all around and a dense discussion of American foreign policy.

This is where Ruhnke parts company from many of his wargame designer peers. Drawing out dichotomous reactions such as Leonhard’s is purpose-built into his games. He wants to touch those who maybe shouldn’t be having as much fun as they do, while also reaching across the aisle to players for whom board games are primarily a tactile brainteaser and social activity.

Elegant as the COIN Series is, I ask Ruhnke if his goals are a stretch. What’s going to happen when one of the uninitiated sits down and realizes his options don’t include shuffling resources to trade goods in the Mediterranean, but rather terrorism, assassination and extortion? “You’re bound to launch some highly consequential thinking,” he says. “Not only about what should and shouldn’t be fun, and why it is, but about the world you live in.”

Ruhnke discusses Labyrinth, a game he designed, during the World Boardgaming Championships. (Al Tielemans/The Washington Post)

The World Boardgaming Championships in Lancaster, Pa., is far from the biggest board gaming convention in the world, but for many wargamers, it matters the most. Over seven days, nearly 2,000 gamers descend on the Lancaster Host Resort to compete against the best in the hobby. In the parlance of the convention, many of the events are “shark tanks” with unwary chum regularly exiting in mortifyingly short time frames.

Designers also use the convention as a petri dish. If a game has cracks, they’ll be discovered here. Ruhnke and Mark Herman, his defense analyst design partner for Fire in the Lake, will run walk-up sessions of the game throughout the week.

I wander to the GMT demo area through rows of skirted tables dotted with games and surrounded by players hunched in thought. Nametag lanyards are worn backward so as not to disturb the pieces. The kibitzing thrum filling the room is punctured by shouts of cheer. I locate Ruhnke flanked by two active games of Fire in the Lake and a gaggle of paunchy, pasty onlookers. Cuba Libre, an exploration of Fidel Castro’s insurgency in 1957-58, is unboxed on the table. This is the first time Ruhnke has seen the printed game. “We should play,” Ruhnke says. “Who wants to play?” Four hands shoot up, including mine.

Ruhnke plays with his son Andrew. Hanging on the wall behind them is a reproduction of a Queen Anne pistol. (Joshua Yospyn/For The Washington Post)

Peter Perla, the principal research scientist at the Center for Naval Analyses and author of “The Art of Wargaming,” thinks that Ruhnke has created game effects that cross over not only among different types of gamers but also between the professional and hobby communities.

“Wargaming is a powerful medium,” Perla says. “Even within a simulation, when you’re responsible for lives and whether your country breaks, you’d better think carefully. If Ruhnke wasn’t as knowledgeable and sensitive to these issues, his games would be a very different, very unpleasant experience.”

Over the five days I spend at the WBC, Ruhnke is rarely alone. If it’s not well-wishers or the curious lined up, it’s other designers wanting to ask questions or chat.

At 10:15 p.m. a flatbed hand truck towering with brown boxes is wheeled into the room. A Distant Plain has finally arrived. The representative from GMT Games is instantly swarmed. Grown men grab their booty and scatter to quiet corners to tear at the shrink-wrap.

Later in the week, I catch Ruhnke walking around the wargaming room. He moves from table to table slowly with seeming purpose, but he offers no commentary. Then he stops, crosses his arms, and looks at the tournament playing out in front of us. The game is Waterloo, first released in 1962.

“It’s great to hit preorder numbers, sell games, know there’s buzz,” he says. “But if someone were still playing my games 40 or 50 years from now? Even if it were only a couple people? That’s lasting. I work in intelligence. There’s not a lot that lasts in intelligence. All I want is to see people playing. That’s immortality.”

Jason Albert is a writer living in St. Paul, Minn.

E-mail us at wpmagazine@washpost.com.

For stories, features such as Date Lab, Gene Weingarten and more, visit WP Magazine.

Follow the Magazine on Twitter.

Like us on Facebook.

12 Amazing Bootstrapped Companies

$
0
0

Comments:"12 Amazing Bootstrapped Companies"

URL:http://beatrixapp.com/blog/12-amazing-bootstrapped-companies.html


There is a new American Dream, though it might be more accurate to call it the "Silicon Valley Dream" - Come to Silicon Valley with a world-changing idea, get funded, and change the world. But is it the only way that startups can achieve phenomenal success?

The startups in this list - many of them incredibly successful in their own right - certainly prove otherwise.

Here are 12 amazing startups that have bootstrapped their way to profitability.

1. Carbonmade

What is it? Carbonmade is an online portfolio service that helps you show off your work.

Why is it amazing? Originally conceived as a tool to create an online portfolio for himself, Dave Gorum, together with his "code wizard" Jason Nelson, ended up opening up this tool to the world due to popular demand. Come 2007, things were going well enough for the boys to drop their client work altogether and work on Carbonmade full time. Today, it is home to 500,000 incredible portfolios - not bad for something that started as a personal project!

2. Github

What is it? Github is a web-based hosting service for software development projects that use the Git revision control system. Say what? Think of it as the Wikipedia for programmers.

Why is it amazing? The founder, Tom Preston-Werner, turned down $300,000 from Microsoft back in 2008 in order to go full-time on GitHub. 5 years later, GitHub hit the 4 million user mark.

3. Clicky

What is it? Clicky is a tool that provides users with real-time traffic intelligence for their websites.

Why is it amazing? Clicky is run by just 2 people (!). The diminutive company did over $500,000 in revenue in 2009, and today has 150,000 customers, 15,000 of which are paying between $4.99-$24.99 monthly.

4. WooThemes

What is it? WooThemes is a WordPress theme and plugin provider with tons of free and commercial products available to jumpstart your website.

Why is it amazing? The founding trio - Adii Pienaar, Mark Forrester, and Magnus Jepson - started off as a virtual team. Today, half of the team works remotely from various European locations, while the other half is located in the Cape Town office. The result: From 2008 till 2011, they were generating revenue consistently, with a 10-15% month-on-month growth. Hurray for remote working! (Sorry, Marissa)

5. AppSumo

What is it? AppSumo is a daily deals website that promotes great products to make work fun.

Why is it amazing? AppSumo shot off like a rocket from the get-go, growing to 500,000 customers in just 18 months. Of course, this is not so surprising, since the founder is none other than Noah Kagan, formerly employee number 30 at Facebook.

6. Mailchimp

What is it? Mailchimp is an online email marketing solution to manage contacts, send emails and track results.

Why is it amazing? Early last year, Mailchimp posted some numbers on their blog while undergoing server spring cleaning - and they say it all. Here's a sample: In 2009, they had 100k users; in 2012, they had 1.2 million users in 158 countries. And another, for good measure: They deliver 2 billion emails per month (?!), and are still growing. Unbelievable.

7. 37Signals

What is it? 37Signals is a web application company that produces simple, focused software.

Why is it amazing? It's hard to pick just one reason why 37Signals is so amazing. One of these reasons would be that, to date, the company's talented partners, Jason Fried and David Heinemeier Hansson (or DHH), have co-written three bestselling books: Getting Real, Rework, and Remote. These books continue to be an inspiration to forward-looking entrepreneurs and business owners worldwide. Another would be that DHH, you know, created an entire web development framework called Ruby on Rails.

8. Envato

What is it? Envato is an ecosystem of sites that help people be creative.

Why is it amazing? Envato Marketplaces, where creatives could sell their products, has been doing incredibly well, with the top seller (as of 2010) selling up to $500,000 in gross sales in under two years. Bear in mind that the founders, 3 designers and a physicist, had zero business experience starting out.

9. Litmus

What is it? Litmus is an application with email marketing testing and tracking tools, that help you send better looking and performing email.

Why is it amazing? Founded in 2005, by 2010 Litmus was significantly above $1m in revenue, and growing by about 10% per month. Pretty impressive for an application that was built in a dorm room by founder Paul Farnell, with just a used computer and a few hundred bucks (and over a single weekend at that).

10. BigCommerce

What is it? BigCommerce is an ecommerce software solution that gives you everything you need to sell online.

Why is it amazing? Despite tough competition from numerous other ecommerce platforms, BigCommerce has held their own admirably, seeing "hockey stick" growth in number of clients, at about 100% year-on-year. Their additional marketing services - which have set them apart from their competitors - have certainly played a big part in making this happen.

11. Braintree

What is it? Braintree performs online and mobile payments for companies around the globe.

Why is it amazing? Right from the start, Braintree founder Bryan Johnson decided to stick to the premium, rather than freemium, model. Turns out, customers are willing to pay for good service, and in 2010 Braintree generated $4.5m in revenue, in addition to doubling its customer base. Fun fact: 99% of their customers come through word-of-mouth. Braintree was acquired by PayPal in 2013 for a reported $800 million.

12. FreshBooks

What is it? Freshbooks is a cloud accounting service that helps service-based small business owners manage their time and expenses.

Why is it amazing? Mike McDerment, founder of FreshBooks, started off the first 24 months with 10 paying customers and revenues of $99 per month - not exactly success story material. Fast-forward to today, and over 5 million people have used FreshBooks, with paying customers in over 120 countries.

Manually Creating an ELF Executable

$
0
0

Comments:"Manually Creating an ELF Executable"

URL:http://robinhoksbergen.com/papers/howto_elf.html


Hello class, and welcome to X86 Masochism 101. Here, you'll learn how to use opcodes directly to create an executable without ever actually touching a compiler, assembler or linker. We'll be using only an editor capable of modifying binary files (i.e. a hex editor) and 'chmod' to make the file executable.[1]

If that doesn't turn you on, I don't know what will.

On a more serious note, this is one of those things that I personally think are a lot of fun. Obviously, you're not going to be using this to create serious million-line programs. However, it can give you an enormous amount of satisfaction to know that you actually understand how this kind of thing really works on a low level. It's also cool to be able to say you wrote an executable without ever touching a compiler or interpreter. Beyond that, there are applications in kernel programming, reverse engineering and (perhaps unsurprisingly) compiler creation.

First of all, let's take a very quick look at how executing an ELF file actually works. Many details will be left out. What's important is getting a good idea of what your PC does when you tell it to execute an ELF binary file.

When you tell the computer to execute an ELF binary, the first thing it'll look for are the appropriate ELF headers. These headers contain all sorts of important information about CPU architecture, sections and segments of the file, and much more - we'll talk more about that later. The header also contains information that helps the computer identify the file as ELF. Most importantly, the ELF header contains information about the program header table in the case of an executable, and the virtual address to which the computer transfers control upon execution.

The program header table, in turn, defines several segments in program headers. If you've ever programmed in assembly, you can think of some of the sections such as 'text' and 'data' as segments in an executable. The program headers also define where the data of these segments are in the actual file, and what virtual memory address to assign to them.

If everything's been done correctly, the computer loads all segments into virtual memory based on the data in the program headers, then transfers control to the virtual memory address assigned in the ELF header, and starts executing instructions.

Before we begin with the practical stuff, please make sure you've got an actual hex editor on your computer, and that you can execute ELF binaries and are on an x86 machine. Most hex editors should work as long as they actually allow you to edit and save your work - I personally like Bless. If you're on Linux, you should be fine as far as ELF binaries are concerned. Some other Unix-like operating systems might work, too, but different OSes implement things in slightly different ways, so I cannot be sure. I also use system calls extensively, which further limits compatibility. If you're on Windows, you're out of luck. Likewise if your CPU architecture is anything other than x86 (though x86_64 should work), since I simply cannot provide opcodes for each and every architecture out there.

There are three phases to creating an ELF executable. First, we'll construct the actual payload using opcodes. Second, we'll build the ELF and program headers to turn this payload into a working program. Finally, we'll make sure all offsets and virtual addresses are correct and fill in the final blanks.

A word of warning: constructing an ELF executable by hand can be extremely frustrating. I've provided an example binary myself which you can use to compare your work to, but keep in mind that there is no compiler or linker to tell you what you've done wrong. If (read: when) you screw up, all your computer will tell you is 'I/O Error' or 'Segmentation Fault', which makes these programs extremely hard to debug. No debugging symbols for you!

Constructing the payload - let's try to keep the payload simple but sufficiently challenging to be interesting. Our payload should put 'Hello World!' on the screen, then exit with code 93. This is harder than it looks. We'll need both a text segment (containing executable instructions) and a data segment (containing the 'Hello World!' string and some other minor data). Let's take a look at the assembly code we need to achieve this:

(text segment)
mov ebx, 1
mov eax, 4
mov ecx, HWADDR
mov edx, HWLENADDR
int 0x80
mov eax, 1
mov ebx, 0x5D
int 0x80

The code above isn't too hard, even if you've never done much assembly. Interrupt 0x80 is used to make system calls, with the values in the registers EAX and EBX telling the kernel what kind of call it is. You can get a more comprehensive reference of the system calls and their values in assembly here.
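
For comparison, here is what those two system calls look like when a higher-level language makes them for you. This is a small Python 3 aside of my own, not part of the original HOWTO:

import os

os.write(1, b"Hello World!\n")   # sys_write to fd 1 (stdout); Python supplies the buffer and length for us
os._exit(93)                     # sys_exit with status 93 (0x5D)

The assembly above has to load the equivalent arguments into EBX, ECX and EDX by hand before firing interrupt 0x80.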

For our payload, we'll need to convert these instructions to hexadecimal opcodes. Luckily, there are good online references that help us do just that. Try to find one for the x86 family, and see if you can figure out how to go from the above code to the hex codes below:

0xBB 0x01 0x00 0x00 0x00
0xB8 0x04 0x00 0x00 0x00
0xB9 0x** 0x** 0x** 0x**
0xBA 0x** 0x** 0x** 0x**
0xCD 0x80
0xB8 0x01 0x00 0x00 0x00
0xBB 0x5D 0x00 0x00 0x00 
0xCD 0x80

(The *s denote virtual addresses. We don't know these yet, so we'll leave them blank for now. [2])

The second part of the payload consists of the data segment, which is actually just the string "Hello World!\n" and a byte that contains the length of the string (0xD). Use a nice ASCII conversion table ('man ascii', anyone?) to convert these values to hex, and you'll see that we'll get the following data:

(data segment)
0x48 0x65 0x6C 0x6C 0x6F 0x20 0x57 0x6F 0x72 0x6C 0x64 0x21 0x0A 0x0D

And there's our final payload!
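
If you would rather script the byte-twiddling than type it all into a hex editor, the following is a minimal Python 3 sketch of my own (not part of the original HOWTO) that builds the same payload, leaving the unknown virtual addresses as zeroed placeholders to be patched later:

# Placeholder for the two virtual addresses we don't know yet.
PLACEHOLDER = b"\x00\x00\x00\x00"

text_segment = (
    b"\xBB\x01\x00\x00\x00"      # mov ebx, 1    (stdout)
    b"\xB8\x04\x00\x00\x00"      # mov eax, 4    (sys_write)
    + b"\xB9" + PLACEHOLDER      # mov ecx, HWADDR     (patched later)
    + b"\xBA" + PLACEHOLDER      # mov edx, HWLENADDR  (patched later)
    + b"\xCD\x80"                # int 0x80
    + b"\xB8\x01\x00\x00\x00"    # mov eax, 1    (sys_exit)
    + b"\xBB\x5D\x00\x00\x00"    # mov ebx, 0x5D (exit code 93)
    + b"\xCD\x80"                # int 0x80
)
assert len(text_segment) == 34   # 0x22 bytes, as counted later on

data_segment = b"Hello World!\n" + bytes([0x0D])   # the string plus its length byte
assert len(data_segment) == 14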

Building the headers - this is where it can get very complicated very quickly. I'll explain some of the more important parameters in the process of building the headers, but you'll probably want to take a good look at a somewhat larger reference if you're ever going to build ELF headers completely by yourself.

An ELF header has the following structure, byte size between parentheses:

e_ident(16), e_type(2), e_machine(2), e_version(4), e_entry(4), e_phoff(4),
e_shoff(4), e_flags(4), e_ehsize(2), e_phentsize(2), e_phnum(2), e_shentsize(2)
e_shnum(2), e_shstrndx(2)

Now we'll fill in the structure, and I'll explain a bit more about these parameters where appropriate. You can always check the reference I linked to before if you want to find out more.

e_ident(16) - this parameter contains the first 16 bytes of information that identifies the file as an ELF file. The first four bytes always hold 0x7F, 'E', 'L', 'F'. Bytes five to seven all contain 0x01 for 32-bit binaries on little-endian machines. Bytes eight to fifteen are padding, so those can be 0x00, and the sixteenth byte contains the length of this block, so that has to be 16 (=0x10).
e_type(2) - set it to 0x02 0x00. This basically tells the computer that it's an executable ELF file.
e_machine(2) - set it to 0x03 0x00, which tells the computer that the ELF file has been created to run on i386 type processors.
e_version(4) - set it to 0x01 0x00 0x00 0x00.
e_entry(4) - transfer control to this virtual address on execution. We haven't determined this, yet, so it's 0x** 0x** 0x** 0x** for now.
e_phoff(4) - offset from file to program header table. We put it right after the ELF header, so that's the size of the ELF header in bytes: 0x34 0x00 0x00 0x00.
e_shoff(4) - offset from file to section header table. We don't need this. 0x00 0x00 0x00 0x00 it is.
e_flags(4) - we don't need flags, either. 0x00 0x00 0x00 0x00 again.
e_ehsize(2) - size of the ELF header, so holds 0x34 0x00.
e_phentsize(2) - size of a program header. Technically, we don't know this yet, but I can already tell you that it should hold 0x20 0x00. Scroll down to check, if you like.
e_phnum(2) - number of program headers, which directly corresponds to the number of segments in the file. We want a text and a data segment, so this should be 0x02 0x00.
e_shentsize(2), e_shnum(2), e_shstrndx(2) - all of these aren't really relevant if we're not implementing section headers (which we aren't), so you can simply set this to 0x00 0x00 0x00 0x00 0x00 0x00.

And that's the ELF header! It's the first thing in the file, and if you've done everything correctly the final header should look like this in hex:

0x7F 0x45 0x4C 0x46 0x01 0x01 0x01 0x00 0x00 0x00 0x00 0x00
0x00 0x00 0x00 0x10 0x02 0x00 0x03 0x00 0x01 0x00 0x00 0x00
0x** 0x** 0x** 0x** 0x34 0x00 0x00 0x00 0x00 0x00 0x00 0x00
0x00 0x00 0x00 0x00 0x34 0x00 0x20 0x00 0x02 0x00 0x00 0x00
0x00 0x00 0x00 0x00
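
The same 52 bytes can also be generated with Python's struct module. Again, this is my own sketch rather than the article's method; it simply encodes the field values listed above, with the still-unknown entry point taken as a parameter:

import struct

# e_ident: magic, 32-bit class, little-endian, version 1, padding, length byte 0x10
E_IDENT = b"\x7fELF" + b"\x01\x01\x01" + b"\x00" * 8 + b"\x10"

def elf_header(e_entry):
    # "<" = little-endian; H = 2-byte field, I = 4-byte field, matching the sizes above
    return E_IDENT + struct.pack(
        "<HHIIIIIHHHHHH",
        2,        # e_type:      executable
        3,        # e_machine:   i386
        1,        # e_version
        e_entry,  # e_entry:     still unknown, filled in at the end
        0x34,     # e_phoff:     program headers start right after this header
        0,        # e_shoff:     no section headers
        0,        # e_flags
        0x34,     # e_ehsize
        0x20,     # e_phentsize
        2,        # e_phnum:     one text and one data segment
        0, 0, 0,  # e_shentsize, e_shnum, e_shstrndx: unused
    )

assert len(elf_header(0)) == 0x34   # 52 bytes, matching e_ehsize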

We're not done with the headers, though. We now need to build the program header table, too. A program header has the following entries:

p_type(4), p_offset(4), p_vaddr(4), p_paddr(4), p_filesz(4), p_memsz(4),
p_flags(4), p_align(4)

Again, I'll fill in the structure (twice, this time: one for the text segment, one for the data segment) and explain a number of things on the way:

p_type(4) - tells the program about the type of segment. Both text and data use PT_LOAD (=0x01 0x00 0x00 0x00) here.
p_offset(4) - offset from the beginning of the file. These values depend on how big the headers and segments are, since we don't want any overlap there. Keep at 0x** 0x** 0x** 0x** for now.
p_vaddr(4) - what virtual address to assign to the segment. Keep at 0x** 0x** 0x** 0x** for now; we'll talk more about it later.
p_paddr(4) - physical addressing is irrelevant, so you may put 0x00 0x00 0x00 0x00 here.
p_filesz(4) - number of bytes in file image of segment, must be larger than or equal to size of payload in segment. Again, set it to 0x** 0x** 0x** 0x**. We'll change this later.
p_memsz(4) - number of bytes in memory image of segment. Note that this doesn't necessarily equal p_filesz, but it may as well in this case. Keep it at 0x** 0x** 0x** 0x** for now, but remember that we can later set it to the same value we assign to p_filesz.
p_flags(4) - these flags can be tricky if you're not used to working with them. What you need to remember is that READ permissions is 0x04, WRITE permissions is 0x02, and EXEC permissions is 0x01. For the text segment we want READ+EXEC, so 0x05 0x00 0x00 0x00, and for the data segment we prefer READ+WRITE+EXEC, so 0x07 0x00 0x00 0x00.
p_align(4) - handles alignment to memory pages. Page sizes are generally 4 KiB, so the value should be 0x1000. Remember, x86 is little-endian, so the final value is 0x00 0x10 0x00 0x00.

Whew. We've certainly done a lot now. We haven't yet filled in many of the fields in the program headers, and we're missing a few bytes in the ELF header, too, but we're getting there. If everything went as planned, your program header table (which you can paste directly behind the ELF header, by the way - remember our offset in that header?) should look something like this:

0x01 0x00 0x00 0x00 0x** 0x** 0x** 0x** 0x** 0x** 0x** 0x**
0x00 0x00 0x00 0x00 0x** 0x** 0x** 0x** 0x** 0x** 0x** 0x**
0x05 0x00 0x00 0x00 0x00 0x10 0x00 0x00 
0x01 0x00 0x00 0x00 0x** 0x** 0x** 0x** 0x** 0x** 0x** 0x**
0x00 0x00 0x00 0x00 0x** 0x** 0x** 0x** 0x** 0x** 0x** 0x**
0x07 0x00 0x00 0x00 0x00 0x10 0x00 0x00
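
As before, here is a hedged Python sketch of my own for this structure. The offset, addresses and sizes are still blanks at this point, so they arrive as parameters:

import struct

def program_header(p_offset, p_vaddr, p_filesz, p_memsz, p_flags):
    return struct.pack(
        "<IIIIIIII",   # eight little-endian 4-byte fields
        1,             # p_type:   PT_LOAD
        p_offset,      # where the segment starts in the file
        p_vaddr,       # virtual address it is mapped to
        0,             # p_paddr:  irrelevant
        p_filesz,      # size of the segment in the file
        p_memsz,       # size of the segment in memory
        p_flags,       # 0x5 = READ+EXEC (text), 0x7 = READ+WRITE+EXEC (data)
        0x1000,        # p_align:  4 KiB pages
    )

assert len(program_header(0, 0, 0, 0, 0x5)) == 0x20   # 32 bytes, matching e_phentsize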

Filling in the blanks - while we've finished most of the hard work by now, there are still some tricky things we need to do. We've got an ELF header and program table we can place at the beginning of our file, and we've got the payload for our actual program, but we still need to put something in the table that tells the computer where to find this payload, and we need to position our payload in the file so it can actually be found.

First, we'll want to calculate the size of our headers and payload before we can determine any offsets. Simply add the sizes of all fields in the headers together and that's the minimal offset for any of the segments. There are 116 bytes in the ELF header + 2 program headers, and 116 = 0x74, so the minimum offset is 0x74. To stay on the safe side, let's put the initial offset at 0x80. Fill 0x74 to 0x7F with 0x00, then put the text segment at 0x80 in the file.

The text segment itself is 34 = 0x22 bytes, which means the minimal offset for the data segment is 0x80 + 0x22 = 0xA2. Let's put the data segment at 0xA4 and fill 0xA2 and 0xA3 with 0x00.

If you've been doing all the above in your hex editor, you will now have a binary file that contains the ELF and program headers from 0x00 to 0x73, 0x74 to 0x7F will be filled with zeroes, the text segment is placed from 0x80 to 0xA1, 0xA2 and 0xA3 are zeroes again, and the data segment goes from 0xA4 to 0xB1. If you're following these instructions and that's not what you've got, now would be a good time to see what went wrong.

Assuming everything's now in the right place in the file, it's time to change some of our previous *s into actual values. I'm simply going to give you the values for each parameter first, and then explain why we're using those particular values.

e_entry(4) - 0x80 0x80 0x04 0x08; We'll choose 0x8048080 as our entry point in virtual memory. There are some rules about what you can and cannot choose as an entry point, but the most important thing to remember is that a starting virtual memory address modulo page size must be equal to the offset in the file modulo page size. You can check the ELF reference and some other good books for more information, but if it seems too complicated, just forget about it and use these values.
p_offset(4) - 0x80 0x00 0x00 0x00 for text, 0xA4 0x00 0x00 0x00 for data. This is because of the obvious reason that that's where these segments are in the file.
p_vaddr(4) - 0x80 0x80 0x04 0x08 for text, 0xA4 0x80 0x04 0x08 for data. We want the text segment to be the entry point for the program, and we're placing both segments in memory in such a way that their virtual addresses are directly congruent to their file offsets.
p_filesz(4) - 0x24 0x00 0x00 0x00 for text, 0x20 0x00 0x00 0x00 for data. These cover the segments' byte sizes in the file and in memory, rounded up slightly beyond the payload, which is allowed. In this case, p_memsz = p_filesz, so use those same values there. The sketch just below pulls the earlier snippets together with these values.
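
Pulling the earlier snippets together (this sketch reuses text_segment, data_segment, PLACEHOLDER, elf_header and program_header from above, and is my illustration rather than the article's procedure), the blanks can now be patched and the 178-byte file written out:

import os
import struct

TEXT_OFF, DATA_OFF = 0x80, 0xA4
BASE = 0x08048000
TEXT_VADDR = BASE + TEXT_OFF             # 0x08048080, which is also e_entry
DATA_VADDR = BASE + DATA_OFF             # 0x080480A4
assert TEXT_VADDR % 0x1000 == TEXT_OFF   # the congruence rule mentioned for e_entry

# Patch the two placeholders: first the string's address, then its length byte's address.
text = text_segment.replace(b"\xB9" + PLACEHOLDER, b"\xB9" + struct.pack("<I", DATA_VADDR))
text = text.replace(b"\xBA" + PLACEHOLDER, b"\xBA" + struct.pack("<I", DATA_VADDR + 13))

image = (elf_header(TEXT_VADDR)
         + program_header(TEXT_OFF, TEXT_VADDR, 0x24, 0x24, 0x5)
         + program_header(DATA_OFF, DATA_VADDR, 0x20, 0x20, 0x7))
image += b"\x00" * (TEXT_OFF - len(image)) + text            # zero padding 0x74-0x7F, then text
image += b"\x00" * (DATA_OFF - len(image)) + data_segment    # zero padding 0xA2-0xA3, then data

assert len(image) == 178
with open("hello", "wb") as f:
    f.write(image)
os.chmod("hello", 0o755)   # the same effect as chmod +x

A hex dump of the resulting file should match the listing below.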

The final result - assuming you followed everything to the letter, this is what you would get if you dumped out everything in hex:

7F 45 4C 46 01 01 01 00 00 00 00 00 00 00 00 10 02 00 03 00
01 00 00 00 80 80 04 08 34 00 00 00 00 00 00 00 00 00 00 00
34 00 20 00 02 00 00 00 00 00 00 00 01 00 00 00 80 00 00 00
80 80 04 08 00 00 00 00 24 00 00 00 24 00 00 00 05 00 00 00
00 10 00 00 01 00 00 00 A4 00 00 00 A4 80 04 08 00 00 00 00
20 00 00 00 20 00 00 00 07 00 00 00 00 10 00 00 00 00 00 00
00 00 00 00 00 00 00 00 BB 01 00 00 00 B8 04 00 00 00 B9 A4
80 04 08 BA B1 80 04 08 CD 80 B8 01 00 00 00 BB 5D 00 00 00
CD 80 00 00 48 65 6C 6C 6F 20 57 6F 72 6C 64 21 0A 0D

That's it. Run chmod +x on the binary file and then execute it. Hello World in 178 bytes.[3] I hope you enjoyed writing it. :-) If you thought this HOWTO was useful or interesting, let me know! I always appreciate getting an email. Tips, comments and/or constructive criticism are always welcome, too.

A PDF version of this HOWTO is also available here.

[1] Theoretically, you don't even need to use chmod or a similar command if your configuration files contain stupid umask values. Don't do that, though.

[2] Your hex editor may not allow adding garbage data that isn't actually in hex to your files. If so, it's preferable to use magic hex numbers to denote "This should be changed later", such as 0xDEADBEEF or 0xFEEDFACE.

[3] You could of course go much smaller than that, but that's something for another post.

Test Double | Our Thinking | The Failures of "Intro to TDD"

$
0
0

Comments:"Test Double | Our Thinking | The Failures of "Intro to TDD""

URL:http://blog.testdouble.com/posts/2014-01-25-the-failures-of-intro-to-tdd.html


I'm now halfway through teaching a two-week crash course on "agile development stuff" to a team of very traditional enterprise Java developers. Condensing fifteen years of our community's progress into 8 half-day workshops has presented an obvious challenge: given the clear time constraints, what set of ideas and practices could conceivably have the biggest positive impact on these developers' professional lives?

After a few days of fits and starts, I've come to at least one realization: test-driven development ("TDD") as it's traditionally introduced to beginners is officially off my list.

The problems with how TDD is typically introduced are fundamental, because they put the learner on a path that leads to a destination which might resemble where they want to go, but doesn't actually show them the way to the promised destination itself. This sort of phenomenon happens often enough that I've decided to finally settle on a name: "WTF now, guys?"

Fig. 1 — Seriously, though, WTF?

I think this illustration captures why arguments between developers over TDD tend to be unsatisfyingly dissonant. When a developer lodges a complaint like, "mock objects were everywhere and it was awful", another developer operating from the context of the taller mountain might reply, "huh? Mock objects are everywhere and it's wonderful!" In fact, it's typical for folks to talk over one another in debates about TDD, and I believe it's because we use the same words and tools to describe and practice entirely unrelated activities. What's a valid problem with TDD from the mountain on the left comes across as nonsense to someone scaling the mountain on the right.

If I'm right (judge for yourself below), I think this argument might explain why so many developers who were once excited by the promise and initial experience of TDD eventually grew disillusioned with it.

Teaching "classic TDD" with code katas.

Let's talk about using code katas to teach TDD.

I started the group with a brief demonstration of test-driving a function that returns the Fibonacci number for a given index. I was stumbling over myself to emphasize that the entire day's examples were not very realistic, but might at least illustrate the basic rhythm of "red-green-refactor". Later, we moved on to a walkthrough of Uncle Bob's bowling game kata. Finally, we finished the day with the attendees pairing off to implement their own Roman-to-Arabic numeral conversion function.

The next day I stood at the whiteboard and asked the class to summarize what they perceived as the benefits of TDD. Unsurprisingly (but importantly), every attendee perceived TDD as being about correctness: "code free of defects", "automated regression testing vs. manual", "changing code without fear of breaking it," etc.

When I reacted to their answers by telling the class that TDD's primary benefit is to improve the design of our code, they were caught entirely off guard. And when I told them that any regression safety gained by TDD is at best secondary and at worst illusory, they started looking over their shoulders to make sure their manager didn't hear me. This did not sound like the bill of goods they had been sold.

Instead, let's pretend that I had sold the code katas above as emblematic of my everyday routine, as opposed to what they are: trivial example exercises. What if I'd turned the students loose under the false premise that TDD as it's practiced in kata exercises would prove useful in their day jobs?

Failure #1: Encouraging Large Units

For starters, if you intend for every test to make some progress in directly solving your problem, you're going to end up with units that do more and more stuff. The first test will result in some immediate problem-solving implementation code. The second test will demand some more. The third test will complicate your design further. At no point will the act of TDD per se prompt you to improve the intrinsic design of your implementation by breaking your large unit up into smaller ones.

Fig. 2 — Consider the above. If a new requirement were introduced, most developers would feel inclined to add additional complexity to the existing unit as opposed to preemptively assuming that the new requirement should demand the creation of a new unit.

Preventing your code's design from growing into a large, sprawling mess is left as an exercise to the developer. This is why many TDD advocates call for a "heavy refactor step" after tests pass, because they recognize this workflow requires intervention on the part of the developer to step back and identify any opportunities to simplify the design.

Refactoring after each green test is gospel among TDD advocates ("red-green-refactor", after all), but in practice most developers often skip it mistakenly, because nothing about the TDD workflow inherently compels people to refactor until they've got a mess on their hands.

Some teachers deal with this problem by exhorting developers to refactor rigorously with an appeal to virtues like discipline and professionalism. That doesn't sound like much of a solution to me, however. Rather than question the professionalism of someone who's already undertaken the huge commitment to practice TDD, I'd rather question whether the design of my tools and practices are encouraging me to do the right thing at each step in my workflow.

Failure #2: Painful Extract Refactors

Nevertheless, suppose that you do take the initiative to perform an extract refactor after the unit starts becoming large.

Fig. 3 — Extracting part of a unit's responsibility into a new child unit. The original test remains unchanged to assure us that the refactor didn't break anything.

Keep in mind, however, that extract refactors are generally quite painful to undertake. Extract refactors often require intense analysis and focus in order to detangle one complex parent object into one tidy child object and one now-slightly-less complex parent. Paraphrasing a conversation with Brandon Keepers, "it's easier to take two balls of yarn and tie them into a knot than it is to take a single knot of yarn and pull them into two balls."

Failure #3: Characterization Tests of Greenfield Code

Even after the refactor is completed successfully, more work remains! To ensure that every unit in your system is paired with a well-designed (I call it "symmetrical") unit test, you now have to design a new unit test that characterizes the behavior of the new child object. This is hugely problematic, because characterization tests are a tool for dealing with legacy code, and as such should never be necessary if all the code was test-driven. And yet, if we define "characterization test" as "wrapping an untested unit with tests to verify its behavior," that's exactly the situation at hand: writing tests for an already-implemented unit that has no matching unit test.

Because the new test is not written in a normal TDD rhythm, the developer runs the same risks as one would when practicing "test-after-development". Namely, because the code already exists, your characterization test can exercise each line of the new child unit without any certainty that the test demands all of the unit's behaviors. So even though you've done the extra (and laudable) work of covering the new unit, the upper bound on that test's quality is lower than if you'd test-driven that unit from scratch. That observation alone suggests the activity is wasteful.

Fig. 4 — Characterizing the new child unit's behavior with a test. We should be wary of the test's robustness, however, because it was the product of "test-after development".

Failure #4: Redundant test coverage

But now your system is plagued by yet another testing evil: redundant test coverage! Covering the same behavior in two places often feels warm and fuzzy to TDD novices, but it doesn't take long before the cost of change spirals out of control.

Suppose a new requirement comes along requiring a change to the extracted child object's behavior. Ideally, this would require exactly three changes (all of which should be readily anticipated by the developer): the integration test that verifies the feature, the unit's test to specify the change in behavior, and the unit itself. But in our redundantly-tested example, the parent unit's test also needs to change.

Worse yet, the developer implementing the change has no reason to expect the parent object's unit test will fail. That means, at best, the developer faces an unpleasant surprise when the parent's unit test breaks and extra work is subsequently required to redesign the parent's test to consider the new behavior of the child. At worst, the developer might lose sight of the fact that the test failure was a false negative caused by a course-of-business change and not a true negative indicating a bug, which could result in wasted time investigating the nature of the parent's test failure.

Fig. 5 — A change in the child object causes the parent's test to break, requiring the parent's test to be redesigned even though the parent object itself didn't change.

Imagine if the child object were used in two places—or ten! A trivial change in an oft-depended-on unit could result in hours and hours of painstaking test fixes for everything that depends on the changed unit.
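
To make the redundancy concrete, here is a deliberately tiny, hypothetical example of my own in Python (the post itself names no language): the child's arithmetic is asserted twice, once in its own test and once, indirectly, through the parent's.

class TaxCalculator:                  # the extracted child unit
    def tax(self, amount):
        return round(amount * 0.21, 2)

class InvoiceTotaler:                 # the parent unit, holding a real child
    def __init__(self):
        self.calculator = TaxCalculator()

    def total(self, amount):
        return amount + self.calculator.tax(amount)

def test_tax_calculator():            # the child's own unit test
    assert TaxCalculator().tax(100) == 21.0

def test_invoice_totaler():           # the parent's test re-verifies the tax rule,
    assert InvoiceTotaler().total(100) == 121.0   # so it breaks whenever that rule changes

A change to the tax rule now forces edits to both tests, which is exactly the multiplication of work described above.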

Failure #5: Eliminating Redundancy Sacrifices Regression Value

If we hope to avoid the eventual pain wrought by redundant test execution, our intrepid attempt to undergo a simple extract method refactor now requires us to redesign the parent unit's test.

Recall that the parent's unit test was written with correctness and regression safety in mind, so its original author will probably not appreciate my prescription to remove the redundancy: replace the real instance of the child unit from the parent's test with a test double in its place.

Fig. 6 — The parent test with its previously-real instance of the child unit replaced with a test double
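
Continuing the hypothetical Python example from the previous failure, the prescription looks roughly like this with unittest.mock: the parent's test now specifies the collaboration and no longer recomputes the child's logic.

from unittest.mock import Mock

def test_invoice_totaler_collaboration():
    totaler = InvoiceTotaler()
    totaler.calculator = Mock()                    # the real child is swapped out
    totaler.calculator.tax.return_value = 21.0     # stubbed, not recalculated

    assert totaler.total(100) == 121.0
    totaler.calculator.tax.assert_called_once_with(100)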

"Well now the test is worthless and doesn't actually verify anything!" the original author might argue. And because of the philosophy under which this code was originally written (that TDD is about solving problems incrementally with a side effect of total regression safety), his complaint would be completely valid. His point could be countered with, "but that unit is already tested separately," but without an additional integration test to ensure the units work correctly together, the original author's concerns aren't likely to be assuaged.

It's at this point that I've seen numerous teams reach a complete dead end, with some being "pro-mocking" and others being "anti-mocking", but with neither really understanding that this disagreement is merely a symptom of the fallacious assumptions that classical TDD encourages us to make.

Failure #6: Making a Mess with Mocks

Even though I'm usually on team "Yay mocks!", their use in a situation like this one is unfortunate. First, by replacing the child unit with a test double, the parent unit's test is going to become convoluted: part of the test will specify bits of logical behavior carried out by the parent, while other parts will specify the intended collaboration between the parent and child objects. In addition to juggling both concerns, the tester's hands will be tied in how the parent-child collaboration is specified because any stubbing will need to be made to agree with whatever logic the parent implements.

Tests that specify both logical behavior and unit collaboration like this are very difficult to read, comprehend, and change. And yet, this frightening scenario probably describes the vast majority of tests in which test doubles are used. It's no wonder I hear so many complaints of "over-mocking" in unit tests, a claim that until relatively recently befuddled me.

The solution to this mess is also a lot of work. The parent unit needs to be refactored such that it only facilitates collaboration between other units and contains no implementation logic of its own. That means the parent unit's other behaviors not implemented by the previously extracted child will also need to be extracted into new units (including all the time-consuming activities described thus far). Finally, the parent's original test should be thrown away and rewritten strictly as a "specification of collaboration", ensuring that the units interact with each other as needed. Oh, and because there's no longer a fully integrated test to make sure the parent unit works anymore, a separate integration test ought to be written.

Fig. 7 — A second child object is extracted to encapsulate any remaining behavior in the parent, the parent's test is then replaced with a test specifying the interaction of the two children.

Ouch. It takes such rigor and discipline to maintain a clean codebase, comprehensible tests, and fast build times when you take this approach that it's no wonder why few teams ultimately realize their goals with TDD.

A Successful Approach to TDD

Instead, I'd like to chart a different course by introducing a very different TDD workflow from that shown above.

First, consider the resulting artifacts of the roundabout, painful process detailed in the previous example:

  • A parent unit that depends on logical behavior implemented in two child units
  • The parent's unit test, which specifies the interaction of the two children
  • The two child units, each with a unit test specifying the logic for which they're responsible

If this is where we're bound to end up, why not head in that direction from the outset? My approach to TDD considers this, and could be described as an exercise in reductionism.

Here's my process:

(1) Pull down a new feature request that will require the system to do a dozen new things.

(2) Panic over how complex the feature seems. Question why you ever started programming in the first place.

(3) Identify an entry point for the feature and establish a public-facing contract to get started (e.g. "I'll add a controller action that returns profits for a given month and a year")

This would also be a good opportunity to encode the public contract in an integration test. This post isn't about integration testing, but I'd recommend a test that both runs in a separate process and uses the application in the same way a real user would (e.g. by sending HTTP requests). Having an integration test for regression safety from the start can help us avoid scratching that itch from our unit tests.

(4) Start writing a unit test for the entry point, but instead of immediately trying to solve the problem, intentionally defer writing any implementation logic! Instead, break down the problem by dreaming up all of the objects you wish you had at your disposal (e.g. "This controller would be simple if only it could depend on something that gave it revenue by month and on something else that gave it costs by month").

This step improves your design by encouraging small, single-purpose units by default.

(5) Implement the entry point with TDD by writing your test as if those imagined units did exist. Inject test doubles into the entry point for the dependencies you think you'll need and specify the subject's interaction with the dependencies in your test. Interaction tests specify "collaboration" units which only govern the usage of other units and contain no logic themselves.

This step can improve your design because it gives you an opportunity to discover a usable API for your new dependencies. If an interaction is hard to test, it's cheap to change a method signature because the dependency doesn't actually exist yet!
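
As an illustration of steps (4) and (5), here is another hypothetical Python sketch with invented names (ProfitsController and for_month are mine, not the post's): the entry point's test wishes its two dependencies into existence and specifies only how they are used.

from unittest.mock import Mock

class ProfitsController:                      # the entry point, a "collaboration unit"
    def __init__(self, revenue_by_month, costs_by_month):
        self.revenue_by_month = revenue_by_month
        self.costs_by_month = costs_by_month

    def profits(self, month, year):
        # In a stricter version even this subtraction would be pushed down
        # into its own logical unit; it is kept inline here to stay short.
        return (self.revenue_by_month.for_month(month, year)
                - self.costs_by_month.for_month(month, year))

def test_profits_controller_collaboration():
    revenue, costs = Mock(), Mock()           # the dependencies don't exist yet
    revenue.for_month.return_value = 1000
    costs.for_month.return_value = 400

    assert ProfitsController(revenue, costs).profits(1, 2014) == 600
    revenue.for_month.assert_called_once_with(1, 2014)
    costs.for_month.assert_called_once_with(1, 2014)

If for_month turns out to be an awkward signature, nothing real has to change yet, which is the cheap API discovery the step describes.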

(6) Repeat steps (4) and (5) for each newly-imagined object, discovering ever-more-fine-grained collaborator objects.

Human nature seems to panic at this step ("we'll be overrun by tiny classes!"), but in practice it's manageable with good code organization. Because each object is small, understandable, and single-use, it's usually painless to delete any or all the units under an obsoleted subtree of your object graph when requirements change. (I've come to pity codebases with many large, oft-reused objects, as it's rarely feasible to delete them, even after they no longer fit their original purpose.)

(7) Eventually, reach a point where there's no conceivable way to pass the buck any further. At that point, implement the necessary bit of logic in what will become a leaf node in your object graph and recurse up the tree to tackle the next bit of work.

The goal of this game is to discover as many collaboration objects as necessary in order to define leaf nodes that implement a piece of narrowly-defined logic that your feature needs.

Tests of "logical units" exhaustively specify useful behavior and should give the author confidence that the unit is both complete and correct. Logical units' tests remain simple because there's no need to use test doubles—they merely provide various inputs and assert appropriate outputs.

I like to call this process "Fake it until you make it™", and while it's definitely based on the keen insights of GOOS, it places a fresh emphasis on reductionism. I also find value in discriminating between the responsibilities of "collaboration units" and "logical units", both for clearer tests and also for more consistent code.

Note also that there is no heavy refactor step necessary when you take this approach to TDD. Extract refactors become an exceptional case and not part of one's routine, which means all the downstream costs of extract refactors that I detailed earlier can be avoided.

Changing how we teach TDD

It took me the better part of four years to understand my frustrations with TDD well enough to articulate this post. After plenty of time wandering the wilderness and mulling over these issues, I can say I finally find TDD to be an entirely productive, happy exercise. TDD isn't worth the time investment for every endeavor, but it's an effective tool for confronting the anxiety & perceived complexity one faces when building a hopefully long-lived system.

My goal in sharing this with you is that we begin teaching others that this is what test-driven development is all about. Novices have little to gain by being put through the useless pain that results from the simplistic assumptions of classical TDD. Let's find ways to teach a more valuable TDD workflow that gives students an immediately valuable tool for breaking down confusingly large problems into manageably small ones.

sahat/hackathon-starter · GitHub

$
0
0

Comments:"sahat/hackathon-starter · GitHub"

URL:https://github.com/sahat/hackathon-starter


Hackathon Starter

A kickstarter for Node.js web applications.

Live Demo: http://hackathonstarter.herokuapp.com

If you have attended any hackathons in the past then you know how much time it takes to get a project started. Decide on an idea, pick a programming language, pick a web framework, pick a CSS framework. A while later, you will have an initial project up on GitHub, and only then can other team members start contributing. Or what about doing something as simple as OAuth 2.0 Authentication? You can spend hours on it if you are not familiar with how OAuth 2.0 works. (As a side-note, over a year ago I had no idea WTF was REST or OAuth2, or how to do a simple "Sign in with Facebook". It was a frustrating experience to say the least.)

When I started this project, my primary focus was on simplicity and ease of use. I also tried to make it as generic and reusable as possible to cover most use cases of hackathon web apps, without being too specific. In the worst case you can use this as a guide for your projects, if for example you are only interested in Sign in with Google authentication and nothing else.

Chances are, you might not need all 4 types of OAuth2 authentication methods, or all 9 API examples. Sadly, there is no step-by-step wizard to configure the boilerplate code just for your use case. So, use what you need, simply delete what you don't need.

Flatly Bootstrap Theme

Default Theme

Features

  • Local Authentication using Email and Password
  • OAuth 2.0 Authentication via Twitter, Facebook, Google or GitHub
  • Sweet Error and Success flash notifications with animations by animate.css
  • MVC Project Structure
  • LESS stylesheets (auto-compiled via Express middleware)
  • Bootstrap 3 + Flat UI + iOS7 Theme
  • Contact Form (powered by Sendgrid)
  • Account Management
    • Profile Details
    • Change Password
    • Link multiple OAuth 2.0 strategies to one account
    • Delete Account
  • API Examples: Facebook, Foursquare, Last.fm, Tumblr, Twitter, and more.

Prerequisites

  • MongoDB
  • Node.js
  • Xcode (Mac OS X) or Visual Studio (Windows)

Note: If you are new to Node.js or Express framework, I highly recommend watching Node.js and Express 101 screencast that teaches Node and Express from scratch.

Getting Started

The easiest way to get started is to clone the repository:

# Fetch only the latest commits.
git clone --depth=1 git@github.com:sahat/hackathon-starter.git
# Move the repository to your own project name.
mv hackathon-starter my-project
cd my-project
# Install NPM dependencies
npm install
node app.js
Note: I strongly recommend installing nodemon: sudo npm install -g nodemon. It will monitor for any changes in your node.js application and automatically restart the server. Once installed, instead of node app.js use nodemon app.js. It is a big time saver in the long run.

Next up, if you want to use any of the APIs or OAuth2 authentication methods, you will need to obtain appropriate credentials: Client ID, Client Secret, API Key, or Username & Password. You will need to go through each provider to generate new credentials.

Obtaining API Keys

  • Visit Google Cloud Console
  • Click CREATE PROJECT button
  • Enter Project Name, then click CREATE
  • Then select APIs & auth from the sidebar and click on Credentials tab
  • Click CREATE NEW CLIENT ID button
    • Application Type: Web Application
    • Authorized Javascript origins: http://localhost:3000
    • Authorized redirect URI: http://localhost:3000/auth/google/callback
  • Copy and paste Client ID and Client secret keys into config/secrets.js
Note: When you're ready to deploy to production, don't forget to add your new URL to Authorized Javascript origins and Authorized redirect URI, e.g. http://my-awesome-app.herokuapp.com and http://my-awesome-app.herokuapp.com/auth/google/callback respectively.
  • Visit Facebook Developers
  • Click Apps > Create a New App in the navigation bar
  • Enter Display Name, then choose a category, then click Create app
  • Copy and paste App ID and App Secret keys into config/secrets.js
    • App ID is clientID, App Secret is clientSecret
  • Click on Settings on the sidebar, then click + Add Platform
  • Select Website
  • Enter http://localhost:3000 for Site URL

TODO: Add Twitter and GitHub instructions.

Project Structure

Name | Description
config/passport.js | Passport Local and OAuth strategies + Passport middleware.
config/secrets.js | Your API keys, tokens, passwords and database URL.
controllers/api.js | Controller for /api route and all api examples.
controllers/contact.js | Controller for contact form.
controllers/home.js | Controller for home page (index).
controllers/user.js | Controller for user account management page.
models/User.js | Mongoose schema and model for User.
public/* | Static assets, i.e. fonts, css, js, img.
views/account | Templates relating to user account.
views/api | Templates relating to API Examples.
views/layout.jade | Base template.
views/home.jade | Home page template.

Note: It makes no difference how you name or structure your views. You could place all your templates in a top-level views directory without having a nested folder structure, if that makes things easier for you. Just don't forget to update extends ../layout and the corresponding res.render() calls in the controllers. For smaller apps, I find having a flat folder structure to be easier to work with.

Note 2: Although your main template - layout.jade only knows about /css/styles.css file, you should be editing styles.less stylesheet. Express will automatically generate styles.css whenever there are changes in LESS file. This is done via less-middleware node.js library.

Useful Tools

HTML to Jade converter

Recommended Design

Recommended Node.js Libraries

  • nodemon - automatically restart node.js server on code change.
  • geoip-lite - get location name from IP address.
  • email.js - send emails with node.js (without sendgrid or mailgun).
  • filesize.js - make file size pretty, e.g. filesize(265318); // "265.32 kB".
  • Numeral.js - a javascript library for formatting and manipulating numbers.

Recommended Client-Side libraries

  • Hover - Awesome css3 animations on mouse hover.
  • platform.js - Get client's operating system name, version, and other useful information.
  • iCheck - Custom nice looking radio and check boxes.
  • Magnific Popup - Responsive jQuery Lightbox Plugin.
  • jQuery Raty - Star Rating Plugin.
  • Headroom.js - Hide your header until you need it.
  • Fotorama - Very nice jQuery gallery.
  • X-editable - Edit form elements inline.
  • Offline.js - Detect when user's internet connection goes offline.
  • Color Thief - Grabs the dominant color or a representative color palette from an image.
  • Alertify.js - Sweet looking alerts and browser dialogs.

FAQ

Why Jade and not Handlebars template engine?

When I first created this project I didn't have any experience with Handlebars. Since then I have worked on Ember.js apps and got myself familiar with the Handlebars syntax. While it is true Handlebars is easier, because it looks like good old HTML, I have no regrets picking Jade over Handlebars. First off, it's the default template engine in Express, so someone who has built Express apps in the past already knows it. Secondly, I find extends and block to be indispensable, which as far as I know, Handlebars does not have out of the box. And lastly, subjectively speaking, Jade looks much cleaner and shorter than Handlebars, or any non-HAML style for that matter.

Why do you have all routes in app.js?

For the sake of simplicity. While there might be a better approach, such as passing app context to each controller as outlined in this blog, I find that style to be confusing for beginners. It took me a long time to grasp the concept of exports and module.exports, let alone having a global app reference in other files. That, to me, is backward thinking. The app.js file is the "center of the universe"; it should be the one referencing models, routes, controllers, etc. When working solo I actually prefer to have everything in app.js as is the case with this REST API server for ember-sass-express-starter's app.js file. That makes things so much simpler!

TODO

  • Pages that require login, should automatically redirect to last attempted URL on successful sign-in.
  • Add more API examples.
  • Mocha tests.

Contributing

If something is unclear, confusing, or needs to be refactored, please let me know. Pull requests are always welcome, but due to the opinionated nature of this project, I cannot accept every pull request. Please open an issue before submitting a pull request.

License

The MIT License (MIT)

Copyright (c) 2014 Sahat Yalkabov

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Android++

What is really happening in Ukraine

$
0
0

Comments:"What is really happening in Ukraine"

URL:http://www.slideshare.net/NazarBartosik/what-is-really-happening-in-ukraine


The horrible truth about what is really happening during the revolution in Ukraine in January 2014.

The outside world usually doesn't realise how cynical and amoral the people who rule the country are, and how they have driven a quiet European country into massive protests, revolution and deaths. Journalists are simply shot like dogs to keep the truth from reaching the outside world.

Only part of the terror is presented here, but you can see how things are run in Ukraine by Yanukovych and his inner circle.


Random Street View - images from all over the world.

$
0
0

Comments:"Random Street View - images from all over the world."

URL:http://randomstreetview.com



UrtheCast Spacewalk EVA January 2014

Digging out the craziest bug you never heard about from 2008: a linux threading regression at time to bleed by Joe Damato

If You Used This Secure Webmail Site, the FBI Has Your Inbox | Threat Level | Wired.com

$
0
0

Comments:"If You Used This Secure Webmail Site, the FBI Has Your Inbox | Threat Level | Wired.com"

URL:http://www.wired.com/threatlevel/2014/01/tormail/


While investigating a hosting company known for sheltering child porn last year, the FBI incidentally seized the entire e-mail database of a popular anonymous webmail service called TorMail.

Now the FBI is tapping that vast trove of e-mail in unrelated investigations.

The bureau’s data windfall, seized from a company called Freedom Hosting, surfaced in court papers last week when prosecutors indicted a Florida man for allegedly selling counterfeit credit cards online. The filings show the FBI built its case in part by executing a search warrant on a Gmail account used by the counterfeiters, where they found that orders for forged cards were being sent to a TorMail e-mail account: “platplus@tormail.net.”

Acting on that lead in September, the FBI obtained a search warrant for the TorMail account, and then accessed it from the bureau’s own copy of “data and information from the TorMail e-mail server, including the content of TorMail e-mail accounts,” according to the complaint (.pdf) sworn out by U.S. Postal Inspector Eric Malecki.

The tactic suggests the FBI is adapting to the age of big-data with an NSA-style collect-everything approach, gathering information into a virtual lock box, and leaving it there until it can obtain specific authority to tap it later. There’s no indication that the FBI searched the trove for incriminating evidence before getting a warrant. But now that it has a copy of TorMail’s servers, the bureau can execute endless search warrants on a mail service that once boasted of being immune to spying.

“We have no information to give you or to respond to any subpoenas or court orders,” read TorMail’s homepage. “Do not bother contacting us for information on, or to view the contents of a TorMail user inbox, you will be ignored.”

In another e-mail case, the FBI last year won a court order compelling secure e-mail provider Lavabit to turn over the master encryption keys for its website, which would have given agents the technical ability to spy on all of Lavabit’s 400,000 users – though the government said it was interested only in one. (Rather than comply, Lavabit shut down and is appealing the surveillance order).

TorMail was the webmail provider of choice for denizens of the so-called Darknet of anonymous and encrypted websites and services, making the FBI’s cache extraordinarily valuable. The affair also sheds a little more light on the already-strange story of the FBI’s broad attack on Freedom Hosting, once a key service provider for untraceable websites.

Freedom Hosting specialized in providing turnkey “Tor hidden service” sites — special sites, with addresses ending in .onion, that hide their geographic location behind layers of routing, and can be reached only over the Tor anonymity network. Tor hidden services are used by those seeking to evade surveillance or protect users’ privacy to an extraordinary degree – human rights groups and journalists as well as serious criminal elements.

By some estimates, Freedom Hosting backstopped fully half of all hidden services at the time it was shut down last year — TorMail among them. But it had a reputation for tolerating child pornography on its servers. In July, the FBI moved on the company and had the alleged operator, Eric Eoin Marques, arrested at his home in Ireland. The U.S. is now seeking his extradition for allegedly facilitating child porn on a massive scale; hearings are set to begin in Dublin this week.

According to the new document, the FBI obtained the data belonging to Freedom Hosting’s customers through a Mutual Legal Assistance request to France – where the company leased its servers – between July 22, 2013 and August 2 of last year.

That’s two days before all the sites hosted by Freedom Hosting, including TorMail, began serving an error message with hidden code embedded in the page, on August 4.

Security researchers dissected the code and found it exploited a security hole in Firefox to de-anonymize users with slightly outdated versions of Tor Browser Bundle, reporting back to a mysterious server in Northern Virginia. Though the FBI hasn’t commented (and declined to speak for this story), the malware’s behavior was consistent with the FBI’s spyware deployments, now known as a “Network Investigative Technique.”

No mass deployment of the FBI’s malware had ever before been spotted in the wild.

The attack through TorMail alarmed many in the Darknet, including the underground’s most notorious figure — Dread Pirate Roberts, the operator of the Silk Road drug forum, who took the unusual step of posting a warning on the Silk Road homepage. An analysis he wrote on the associated forum now seems prescient.

“I know that MANY people, vendors included, used TorMail,” he wrote. “You must think back through your TorMail usage and assume everything you wrote there and didn’t encrypt can be read by law enforcement at this point and take action accordingly. I personally did not use the service for anything important, and hopefully neither did any of you.” Two months later the FBI arrested San Francisco man Ross William Ulbricht as the alleged Silk Road operator.

The connection, if any, between the FBI obtaining Freedom Hosting’s data and apparently launching the malware campaign through TorMail and the other sites isn’t spelled out in the new document. The bureau could have had the cooperation of the French hosting company that Marques leased his servers from. Or it might have set up its own Tor hidden services using the private keys obtained from the seizure, which would allow it to adopt the same .onion addresses used by the original sites.

The French company also hasn’t been identified. But France’s largest hosting company, OVH, announced on July 29, in the middle of the FBI’s then-secret Freedom Hosting seizure, that it would no longer allow Tor software on its servers. A spokesman for the company says he can’t comment on specific cases, and declined to say whether Freedom Hosting was a customer.

“Wherever the data center is located, we conduct our activities in conformity with applicable laws, and as a hosting company, we obey search warrants or disclosure orders,” OVH spokesman Benjamin Bongoat told WIRED. “This is all we can say as we usually don’t make any comments on hot topics.”

(Hat-Tip: Brian Krebs)

King Candy Crushes Developers, The Saga

$
0
0

Comments:"King Candy Crushes Developers, The Saga"

URL:http://www.gemfruit.com/articles/king-candy-crushes-developers-saga/#


King Candy Crushes Developers, The Saga

There’s a lot of buzz going around the internet concerning King’s trademarking of the words “Candy” and “Saga”, and that buzz just got louder. Yesterday, I was called out of the blue by writers for VentureBeat and GameInformer concerning an event that I had long forgotten about. The story that resulted from these calls is spreading across the internet like wildfire, appearing on sites such as Forbes, VentureBeat, Game Informer, and more, so I thought it would be best to give my complete side of the story.

Travel Back To 2009

Back in 2009, I was still a member of Epic Shadow and was in a pretty sticky situation. I had nearly exhausted the funds saved up from selling a primary sponsorship of Tower of Greed to King, and had a very shitty living situation; I needed to move, and fast. During this time, I was an extremely active member of FlashGameLicense (now known as FGL) and had regular contact with Lars Jörnow, who was the games acquisition manager for King at the time.

One day, Lars messaged us and asked if we wanted a small job. He told us that he had been working with another developer to secure a sponsorship for the game Scamper Ghost and that the developer had backed out of the deal. King wasn't too pleased with that, so Lars requested that we clone the game for them. I had a good working relationship with King then and was quite upset that someone would break the FGL terms and conditions. I initially thought the job was a little immoral, and a bit sketchy, but we had worked with King before, talked regularly, and Lars made these other developers seem like really unprofessional jerks. Lars requested that we build the game quickly and explained that it would be optimal if we could beat the original game to market. Between needing the money and Lars painting the developers of Scamper Ghost as the bad guys, we took the job.

We built the game from scratch (many have wondered if we stole art assets or code; we did not) using Flash and the Box2D physics library. It was my first time ever using Box2D, and I had quite a bit of fun developing the game. We got so into development that we decided to add a fourth enemy to the game (not seen in Scamper Ghost) and then told King that they had to increase the price, because the amount of work and polish we had put in was worth more than the initial offer. They agreed and requested that we name the game "Pac Avoid", as King felt that would be best for marketing the game. We thought the name was stupid (both because it didn't sound good and because it ripped off Pac-Man, which the game has little to do with), but we went with it anyway. Our only additional term to the deal was that the Epic Shadow branding not be placed in the game, as we found the entire project to be sketchy and wanted nothing to do with it post-release. This essentially changed the deal from a primary sponsorship to an exclusive (an increased value for King), which they gladly accepted.

Once the game was released, there was understandably a lot of outrage from the Scamper Ghost team, as their game had obviously been ripped off. They did a bit of detective work and quickly found out that I was one of the developers of Pac Avoid. I didn't deny my involvement in the project, and we exchanged a number of emails concerning the matter. In the end, the Scamper Ghost team had ample evidence that we were indeed contracted to clone the game, and that we were misled into believing they had made some very unethical business decisions to pull away from King and go with Max Games. They were still pissed at the entire situation, but the overall conclusion was that we were forgiven, and King was to blame.

Fast Forward To 2014

Now that we know the back-story, let's take a look at why I was called yesterday. It turns out Matthew Cox (developer of Scamper Ghost, the game we cloned) publicly called out King as a hypocrite on his personal site. King has been working on trademarking the words "Candy" and "Saga" and has allegedly hassled a number of smaller indie developers who have those words in their games (The Banner Saga and All Candy Casino Slots). King claims it's an attempt to protect Candy Crush Saga and its assets, while others feel that using the word "Candy" or "Saga", especially not together, is in no way threatening. The accusation by Matthew Cox has spread like wildfire across the internet, and King is currently facing what most companies would consider a PR nightmare.

Turning The Blame

Fast forward another 24 hours, and a new story has emerged: King has now removed Pac Avoid from its site (and from many others, judging by my searches). I currently don't have the original source of the game from 2009 (though I'm attempting to retrieve it), but I did manage to snag a copy of the SWF from MiniJuegos. Not only has King removed Pac Avoid from its site, it has also shifted the blame for the cloning of Scamper Ghost onto me by claiming they do a "thorough search of other games in the marketplace", which in this case is an obvious lie. Take a look at the following:

“King does not clone other peoples’ games,” a spokesperson for the company told GamesBeat. “King believes that [intellectual property] — both our own IP and that of others — is important and should be properly protected. Like any prudent company, we take all appropriate steps to protect our IP in a sensible and fair way. At the same time, we are respectful of the rights and IP of other developers.

“Before we launch any game, we do a thorough search of other games in the marketplace, as well as a review of trademark filings, to ensure that we are not infringing anyone else’s IP. However, for the avoidance of doubt, in this case, this game — which was coded by a third-party developer five years ago — has been taken down.” - Read the full second article on VentureBeat.

Hopefully you can see the extreme contradiction between what was supposedly said at this staff conference at King and the press release they offered yesterday. The claim that "King had all this new ideas for the game, which they knew were good, but no game to implement it in" is a complete lie; no further input or instructions were given beyond "clone the game". I was originally going to take a fairly neutral stance on this entire matter (with more of a focus on the core issue of trademark law and cloning), but now I'm quite irked that King has the nerve to blatantly lie and shift the blame to me. Matthew Cox has personally let me know he doesn't blame me for the incident, both in 2009 and yesterday, and I plan to make that known.

The Real Issue

What keeps getting left out is why Matthew Cox came forward with the Scamper Ghost / Pac Avoid story in the first place. The point of all of this was to show how hypocritical and ridiculous King is being by attempting to trademark common words. Words such as "Warrior", "Quest", "Saga", "Deluxe", and many more have been used for years in games of various (and sometimes similar!) genres. Should Square have tried to duke it out with Capcom because Final Fight was too close to Final Fantasy? Should Enix have attacked Capcom because Mickey's Magical Quest uses "Quest", clearly stepping too close to the Dragon Quest series? The obvious answer is no, because it would have been as ridiculous an act then as it is now, except that now the moneybags powerhouse King is doing just that. King claims that "Saga" is an important word that they've used to differentiate their games, and I get that claim, but perhaps they should be a little more original the next time they come up with a unique word or phrase to stand out.

My Closing Thoughts

I find it pathetic that a company such as King would throw the blame around in this situation while hypocritically attacking others. Trademarking common words such as “Candy” is just ridiculous. Bullying indie developers is even worse. The company is sitting on billions of dollars and everyone already knows about Candy Crush; I don’t think they need to worry about getting ripped off, especially not by the people they’re targeting. Based on their response to the recent allegations, I now know that the company is both deceitful and hypocritical. I was contracted to make Pac Avoid, a direct clone of Scamper Ghost, and I did just that – why King would try to lie about the obvious proven truth is beyond me. I understand that they have a lot to lose by admitting to something from so long ago, but the truth is clear, and they’re just digging a deeper hole by lying about it.

The entire incident in 2009 isn't the most shining moment of my game development career, but it is what it is. King was a business contact, I accepted a contract job, and I was apparently lied to so that I would feel morally justified enough to clone and blatantly rip off another developer's work. Even then, I left my branding out, as the entire situation felt shady. What's worse is that I was only 20 years old when this happened; King is a mature company, presumably run by mature adults. I've apologized to Matthew Cox and his partner in the past, and I do so again now: I'm sorry I took on such an immoral contract job, and I've learned from my past. It's a shame King can't fess up and do the same.
