Comments:"Streetpong"
Streetpong
nemasu/asmttpd · GitHub
Comments:"nemasu/asmttpd · GitHub"
URL:https://github.com/nemasu/asmttpd
asmttpd
Web server for Linux written in amd64 assembly.
Note: This is very much a work in progress and not ready for production.
Features:
- Multi-threaded via a thread pool
- No libraries required (only 64-bit Linux)
- Fast
What works:
- Serving files from specified document root.
- 200
- 206
- 404
- Content-types: xml, html, xhtml, gif, png, jpeg, css, js, and octet-stream.
Planned Features:
- Directory listing.
- HEAD support.
Current Limitations / Known Issues
- 206 Partial Content body is limited to ~10MB; make another request to continue.
- Proper HTTP error codes are not yet implemented.
Installation
Run make, or make release for the non-debug version. You will need yasm installed.
Usage
Run sudo ./asmttpd web_root/ in your shell. Visit http://localhost/index.html in your browser.
Changes
2014-02-03 : asmttpd - 0.04
- 200 now streams full amount
2014-02-01 : asmttpd - 0.03
- Files are split if too large to fit into buffer.
- Added 206 responses with Content-Range handling
2014-01-30 : asmttpd - 0.02
- Added xml, xhtml, gif, png, jpeg, css, and javascript content types.
- Changed thread memory size to something reasonable. You can tweak it according to available memory. See comments in main.asm
- Added simple request logging.
- Added removal of '../' in URL.
CCC | Chaos Computer Club files criminal complaint against the German Government
Comments:"CCC | Chaos Computer Club files criminal complaint against the German Government"
URL:http://www.ccc.de/en/updates/2014/complaint
On Monday, the Chaos Computer Club (CCC) and the International League for Human Rights (ILMR) filed a criminal complaint with the Federal Prosecutor General's office. The complaint is directed against the German federal government and the presidents of the German secret services, namely the Bundesnachrichtendienst, the Militärischer Abschirmdienst, the Bundesamt für Verfassungsschutz, and others. We accuse US, British and German secret agents, their supervisors, the German Minister of the Interior, as well as the German Chancellor of illegal and prohibited covert intelligence activities, of aiding and abetting those activities, of violation of the right to privacy, and of obstruction of justice in office by tolerating and cooperating with the electronic surveillance of German citizens by the NSA and GCHQ.
After months of press coverage of mass surveillance by secret services and offensive attacks on information technology systems, we now have certainty that German and other countries' secret services have violated German criminal law. With this criminal complaint, we hope to finally initiate investigations by the Federal Prosecutor General against the German government. The CCC has learned with certainty that the leaders of the secret services and the federal government have aided and abetted the commission of these crimes.
It is the understanding of the CCC that these crimes are felonies pursuant to German federal law, specifically § 99 StGB (illegal activity as a foreign spy), §§ 201 ff. StGB (violation of privacy) and § 258 StGB (obstruction of justice).
"Every citizen is affected by the massive surveillance of their private communications. Our laws protect us and threatens those responsible for such surveillance with punishment. Therefore an investigation by the Federal Prosecutor General is necessary and mandatory by law – and a matter of course. It is unfortunate that those responsible and the circumstances of their crimes have not been investigated," says Dr. Julius Mittenzwei, attorney and long time member of the CCC.
It is unacceptable that the public authorities have not helped to investigate these crimes, even though the spying is plainly visible, for example at the so-called Dagger Complex and on the August Euler airfield near Griesheim. Together with the International League for Human Rights and digitalcourage e. V., we want to bring to light more information about the illegal activities of German and foreign secret services and intend to bring the offenders to account.
In the criminal complaint, we ask to hear technical expert and whistleblower Edward Snowden as a witness, and that he be provided safe passage and protection against extradition to the US.
We are not only calling on the Federal Prosecutor General's office to investigate; we also ask you to get involved and file a criminal complaint of your own. The text of the complaint is available on request.
Contact:
H.-Eberhard Schultz and Claus Förster, solicitors
Haus der Demokratie und Menschenrechte, Greifswalder Str. 4, 10405 Berlin
Tel: 030 43725026, Fax: 030 43725027, cf(at)cfoerster.de
Links:
Internationale Liga für Menschenrechte e. V.: http://ilmr.de/
A re-introduction to JavaScript (JS Tutorial) - JavaScript | MDN
Comments:"A re-introduction to JavaScript (JS Tutorial) - JavaScript | MDN"
URL:https://developer.mozilla.org/en-US/docs/Web/JavaScript/A_re-introduction_to_JavaScript
Introduction
Why a re-introduction? Because JavaScript has a reasonable claim to being the world's most misunderstood programming language. While often derided as a toy, beneath its deceptive simplicity lie some powerful language features. 2005 saw the launch of a number of high-profile JavaScript applications, showing that deeper knowledge of this technology is an important skill for any web developer.
It's useful to start with an idea of the language's history. JavaScript was created in 1995 by Brendan Eich, an engineer at Netscape, and first released with Netscape 2 early in 1996. It was originally going to be called LiveScript, but was renamed in an ill-fated marketing decision to try to capitalize on the popularity of Sun Microsystems' Java language — despite the two having very little in common. This has been a source of confusion ever since.
Microsoft released a mostly-compatible version of the language called JScript with IE 3 several months later. Netscape submitted the language to Ecma International, a European standards organization, which resulted in the first edition of the ECMAScript standard in 1997. The standard received a significant update as ECMAScript edition 3 in 1999, and has stayed pretty much stable ever since. The fourth edition was abandoned, due to political differences concerning language complexity. Many parts of the fourth edition formed a basis of the new ECMAScript edition 5, published in December of 2009.
This stability is great news for developers, as it's given the various implementations plenty of time to catch up. I'm going to focus almost exclusively on the edition 3 dialect. For familiarity, I will stick with the term JavaScript throughout.
Unlike most programming languages, the JavaScript language has no concept of input or output. It is designed to run as a scripting language in a host environment, and it is up to the host environment to provide mechanisms for communicating with the outside world. The most common host environment is the browser, but JavaScript interpreters can also be found in Adobe Acrobat, Photoshop, SVG images, Yahoo!'s Widget engine, as well as server-side environments such as node.js. However, the list of areas where JavaScript is used only begins there: it also includes NoSQL databases like the open-source Apache CouchDB, embedded computers, and complete desktop environments like GNOME (one of the most popular GUIs for GNU/Linux operating systems).
Overview
JavaScript is an object oriented dynamic language; it has types and operators, core objects, and methods. Its syntax comes from the Java and C languages, so many structures from those languages apply to JavaScript as well. One of the key differences is that JavaScript does not have classes; instead, the class functionality is accomplished by object prototypes. The other main difference is that functions are objects, giving functions the capacity to hold executable code and be passed around like any other object.
Let's start off by looking at the building block of any language: the types. JavaScript programs manipulate values, and those values all belong to a type. JavaScript's types are:
- Number
- String
- Boolean
- Object
... oh, and Undefined and Null, which are slightly odd. And Arrays, which are a special kind of object. And Dates and Regular Expressions, which are objects that you get for free. And to be technically accurate, functions are just a special type of object. So the type diagram looks more like this:
- Number
- String
- Boolean
- Object
- Function
- Array
- Date
- RegExp
- Null
- Undefined
And there are some built in Error types as well. Things are a lot easier if we stick with the first diagram, though.
Numbers
Numbers in JavaScript are "double-precision 64-bit format IEEE 754 values", according to the spec. This has some interesting consequences. There's no such thing as an integer in JavaScript, so you have to be a little careful with your arithmetic if you're used to math in C or Java. Watch out for stuff like:
0.1 + 0.2 == 0.30000000000000004
In practice, integer values are treated as 32-bit ints (and are stored that way in some browser implementations), which can be important for bit-wise operations. For details, see The Complete JavaScript Number Reference.
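For instance, a quick console sketch (not from the original article) showing how bitwise operators truncate their operands to 32 bits, so values beyond that range wrap around:
> (Math.pow(2, 32) + 1) | 0
1
> Math.pow(2, 31) | 0
-2147483648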
The standard numeric operators are supported, including addition, subtraction, modulus (or remainder) arithmetic and so forth. There's also a built-in object that I forgot to mention earlier called Math to handle more advanced mathematical functions and constants:
Math.sin(3.5);
var d = Math.PI * r * r;
You can convert a string to an integer using the built-in parseInt() function. This takes the base for the conversion as an optional second argument, which you should always provide:
> parseInt("123", 10)
123
> parseInt("010", 10)
10
If you don't provide the base, you can get surprising results in older browsers (pre-2013):
> parseInt("010")
8
That happened because the parseInt function decided to treat the string as octal due to the leading 0.
If you want to convert a binary number to an integer, just change the base:
> parseInt("11", 2)
3
Similarly, you can parse floating point numbers using the built-in parseFloat() function, which always uses base 10, unlike its parseInt() cousin.
You can also use the unary + operator to convert values to numbers:
> + "42"
42
A special value called NaN (short for "Not a Number") is returned if the string is non-numeric:
> parseInt("hello", 10)
NaN
NaN is toxic: if you provide it as an input to any mathematical operation, the result will also be NaN:
> NaN + 5
NaN
You can test for NaN using the built-in isNaN() function:
> isNaN(NaN)
true
JavaScript also has the special values Infinity and -Infinity:
> 1 / 0
Infinity
> -1 / 0
-Infinity
You can test for Infinity, -Infinity and NaN values using the built-in isFinite() function:
> isFinite(1/0)
false
> isFinite(-Infinity)
false
> isFinite(NaN)
false
The parseInt() and parseFloat() functions parse a string until they reach a character that isn't valid for the specified number format, then return the number parsed up to that point. However, the "+" operator simply converts the string to NaN if there is any invalid character in it. Try parsing the string "10.2abc" with each method in the console and you'll see the difference: parseInt() gives 10, parseFloat() gives 10.2, and the "+" operator gives NaN.
Strings
Strings in JavaScript are sequences of characters. More accurately, they're sequences of Unicode characters, with each character represented by a 16-bit number. This should be welcome news to anyone who has had to deal with internationalisation.
If you want to represent a single character, you just use a string of length 1.
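For example, a quick console sketch (not from the original article): even a non-ASCII character is just a length-1 string, and charCodeAt() exposes its 16-bit code unit:
> "π".length
1
> "π".charCodeAt(0)
960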
To find the length of a string, access its length property:
> "hello".length
5
There's our first brush with JavaScript objects! Did I mention that you can use strings like objects too? They have methods as well:
> "hello".charAt(0) h> "hello, world".replace("hello", "goodbye") goodbye, world> "hello".toUpperCase() HELLO
Other types
JavaScript distinguishes between null, which is a value that indicates a deliberate non-value, and undefined, which is a value of type 'undefined' that indicates an uninitialized value — that is, a value that hasn't even been assigned yet. We'll talk about variables later, but in JavaScript it is possible to declare a variable without assigning a value to it. If you do this, the variable's type is undefined.
JavaScript has a boolean type, with possible values true and false (both of which are keywords). Any value can be converted to a boolean according to the following rules:
- false, 0, the empty string (""), NaN, null, and undefined all become false.
- All other values become true.
You can perform this conversion explicitly using the Boolean() function:
> Boolean("")
false
> Boolean(234)
true
However, this is rarely necessary, as JavaScript will silently perform this conversion when it expects a boolean, such as in an if statement (see below). For this reason, we sometimes speak simply of "true values" and "false values," meaning values that become true and false, respectively, when converted to booleans. Alternatively, such values can be called "truthy" and "falsy", respectively.
Boolean operations such as && (logical and), || (logical or), and ! (logical not) are supported; see below.
Variables
New variables in JavaScript are declared using the var keyword:
var a;
var name = "simon";
If you declare a variable without assigning any value to it, its type is undefined.
An important difference from other languages like Java is that in JavaScript, blocks do not have scope; only functions have scope. So if a variable is defined using var in a compound statement (for example inside an if control structure), it will be visible to the entire function.
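For example, a minimal sketch of this behavior (the function and variable names are just illustrative):
function varScopeDemo() {
  if (true) {
    var x = 5; // declared inside the if block...
  }
  return x; // ...but visible to the whole function
}
varScopeDemo(); // 5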
Operators
JavaScript's numeric operators are +
, -
, *
, /
and %
- which is the remainder operator. Values are assigned using =
, and there are also compound assignment statements such as +=
and -=
. These extend out to x = x operator y
.
x += 5 x = x + 5
You can use ++ and -- to increment and decrement respectively. These can be used as prefix or postfix operators.
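A quick console sketch of the prefix/postfix difference (not from the original text): the postfix form evaluates to the old value, the prefix form to the new one.
> var x = 3;
> x++
3
> x
4
> ++x
5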
The + operator also does string concatenation:
> "hello" + " world"
hello world
If you add a string to a number (or other value) everything is converted into a string first. This might catch you out:
> "3" + 4 + 5
345
> 3 + 4 + "5"
75
Adding an empty string to something is a useful way of converting it.
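For example, a quick console sketch (the typeof check confirms the result really is a string):
> 42 + ""
42
> typeof (42 + "")
string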
Comparisons in JavaScript can be made using <, >, <= and >=. These work for both strings and numbers. Equality is a little less straightforward. The double-equals operator performs type coercion if you give it different types, with sometimes interesting results:
> "dog" == "dog"
true
> 1 == true
true
To avoid type coercion, use the triple-equals operator:
> 1 === true
false
> true === true
true
There are also != and !== operators.
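These follow the same pattern: != coerces its operands while !== does not, as a quick console sketch shows:
> 1 != "1"
false
> 1 !== "1"
true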
JavaScript also has bitwise operations. If you want to use them, they're there.
Control structures
JavaScript has a similar set of control structures to other languages in the C family. Conditional statements are supported by if and else; you can chain them together if you like:
var name = "kittens";
if (name == "puppies") {
  name += "!";
} else if (name == "kittens") {
  name += "!!";
} else {
  name = "!" + name;
}
name == "kittens!!"
JavaScript has while loops and do-while loops. The first is good for basic looping; the second is for loops where you wish to ensure that the body of the loop is executed at least once:
while (true) {
  // an infinite loop!
}
var input;
do {
  input = get_input();
} while (inputIsNotValid(input))
JavaScript's for loop is the same as that in C and Java: it lets you provide the control information for your loop on a single line.
for (var i = 0; i < 5; i++) {
  // Will execute 5 times
}
The && and || operators use short-circuit logic, which means whether they will execute their second operand is dependent on the first. This is useful for checking for null objects before accessing their attributes:
var name = o && o.getName();
Or for setting default values:
var name = otherName || "default";
JavaScript has a ternary operator for conditional expressions:
var allowed = (age > 18) ? "yes" : "no";
The switch statement can be used for multiple branches based on a number or string:
switch (action) {
  case 'draw':
    drawit();
    break;
  case 'eat':
    eatit();
    break;
  default:
    donothing();
}
If you don't add a break statement, execution will "fall through" to the next level. This is very rarely what you want — in fact it's worth specifically labelling deliberate fallthrough with a comment if you really meant it, to aid debugging:
switch (a) {
  case 1: // fallthrough
  case 2:
    eatit();
    break;
  default:
    donothing();
}
The default clause is optional. You can have expressions in both the switch part and the cases if you like; comparisons take place between the two using the === operator:
switch (1 + 3) {
  case 2 + 2:
    yay();
    break;
  default:
    neverhappens();
}
Objects
JavaScript objects can be thought of as simple collections of name-value pairs. As such, they are similar to:
- Dictionaries in Python
- Hashes in Perl and Ruby
- Hash tables in C and C++
- HashMaps in Java
- Associative arrays in PHP
The fact that this data structure is so widely used is a testament to its versatility. Since everything (bar core types) in JavaScript is an object, any JavaScript program naturally involves a great deal of hash table lookups. It's a good thing they're so fast!
The "name" part is a JavaScript string, while the value can be any JavaScript value — including more objects. This allows you to build data structures of arbitrary complexity.
There are two basic ways to create an empty object:
var obj = new Object();
And:
var obj = {};
These are semantically equivalent; the second is called object literal syntax, and is more convenient. This syntax is also the core of the JSON format and should be preferred at all times.
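As a minimal sketch of that relationship (the values are illustrative, and the JSON object used here was standardized in ECMAScript 5): a JSON document is essentially an object literal in text form.
var text = '{"name": "Carrot", "size": 12}'; // JSON text looks like an object literal
var obj = JSON.parse(text); // parse the JSON text into a live object
obj.name; // "Carrot"
JSON.stringify(obj); // and back to a JSON string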
Once created, an object's properties can again be accessed in one of two ways:
obj.name = "Simon"; var name = obj.name;
And...
obj["name"] = "Simon"; var name = obj["name"];
These are also semantically equivalent. The second method has the advantage that the name of the property is provided as a string, which means it can be calculated at run-time, though using this method prevents some JavaScript engine and minifier optimizations from being applied. It can also be used to set and get properties with names that are reserved words:
obj.for = "Simon"; // Syntax error, because 'for' is a reserved word
obj["for"] = "Simon"; // works fine
Object literal syntax can be used to initialise an object in its entirety:
var obj = {
  name: "Carrot",
  "for": "Max",
  details: {
    color: "orange",
    size: 12
  }
}
Attribute access can be chained together:
> obj.details.color
orange
> obj["details"]["size"]
12
Arrays
Arrays in JavaScript are actually a special type of object. They work very much like regular objects (numerical properties can naturally be accessed only using [] syntax) but they have one magic property called 'length'. This is always one more than the highest index in the array.
The old way of creating arrays is as follows:
> var a = new Array();
> a[0] = "dog";
> a[1] = "cat";
> a[2] = "hen";
> a.length
3
A more convenient notation is to use an array literal:
> var a = ["dog", "cat", "hen"];> a.length 3
Leaving a trailing comma at the end of an array literal is inconsistent across browsers, so don't do it.
Note that array.length isn't necessarily the number of items in the array. Consider the following:
> var a = ["dog", "cat", "hen"];
> a[100] = "fox";
> a.length
101
Remember — the length of the array is one more than the highest index.
If you query a non-existent array index, you get undefined:
> typeof a[90]
undefined
If you take the above into account, you can iterate over an array using the following:
for (var i = 0; i < a.length; i++) {
  // Do something with a[i]
}
This is slightly inefficient as you are looking up the length property on every iteration. An improvement is this:
for (var i = 0, len = a.length; i < len; i++) {
  // Do something with a[i]
}
An even nicer idiom is:
for (var i = 0, item; item = a[i++];) {
  // Do something with item
}
Here we are setting up two variables. The assignment in the middle part of the for loop is also tested for truthfulness — if it succeeds, the loop continues. Since i is incremented each time, items from the array will be assigned to item in sequential order. The loop stops when a "falsy" item is found (such as undefined).
Note that this trick should only be used for arrays which you know do not contain "falsy" values (arrays of objects or DOM nodes, for example). If you are iterating over numeric data that might include a 0, or string data that might include the empty string, you should use the i, len idiom instead.
Another way to iterate is to use the for...in loop. Note that if someone added new properties to Array.prototype, they will also be iterated over by this loop:
for (var i in a) {
  // Do something with a[i]
}
If you want to append an item to an array, the safest way to do it is like this:
a[a.length] = item; // same as a.push(item);
Since a.length is one more than the highest index, you can be assured that you are assigning to an empty position at the end of the array.
Arrays come with a number of methods, including:
- a.toString(), a.toLocaleString()
- a.concat(item, ...)
- a.join(sep)
- a.pop(), a.push(item, ...)
- a.reverse(), a.shift(), a.unshift(item, ...)
- a.slice(start, end)
- a.sort([cmpfn])
- a.splice(start, delcount[, item, ...])
Functions
Along with objects, functions are the core component in understanding JavaScript. The most basic function couldn't be much simpler:
function add(x, y) {
  var total = x + y;
  return total;
}
This demonstrates everything there is to know about basic functions. A JavaScript function can take 0 or more named parameters. The function body can contain as many statements as you like, and can declare its own variables which are local to that function. The return statement can be used to return a value at any time, terminating the function. If no return statement is used (or an empty return with no value), JavaScript returns undefined.
The named parameters turn out to be more like guidelines than anything else. You can call a function without passing the parameters it expects, in which case they will be set to undefined.
> add()
NaN // You can't perform addition on undefined
You can also pass in more arguments than the function is expecting:
> add(2, 3, 4)
5 // added the first two; 4 was ignored
That may seem a little silly, but functions have access to an additional variable inside their body called arguments, which is an array-like object holding all of the values passed to the function. Let's re-write the add function to take as many values as we want:
function add() {
  var sum = 0;
  for (var i = 0, j = arguments.length; i < j; i++) {
    sum += arguments[i];
  }
  return sum;
}
> add(2, 3, 4, 5)
14
That's really not any more useful than writing 2 + 3 + 4 + 5 though. Let's create an averaging function:
function avg() {
  var sum = 0;
  for (var i = 0, j = arguments.length; i < j; i++) {
    sum += arguments[i];
  }
  return sum / arguments.length;
}
> avg(2, 3, 4, 5)
3.5
This is pretty useful, but introduces a new problem. The avg() function takes a comma-separated list of arguments — but what if you want to find the average of an array? You could just rewrite the function as follows:
function avgArray(arr) {
  var sum = 0;
  for (var i = 0, j = arr.length; i < j; i++) {
    sum += arr[i];
  }
  return sum / arr.length;
}
> avgArray([2, 3, 4, 5])
3.5
But it would be nice to be able to reuse the function that we've already created. Luckily, JavaScript lets you call a function with an arbitrary array of arguments, using the apply() method of any function object.
> avg.apply(null, [2, 3, 4, 5])
3.5
The second argument to apply() is the array to use as arguments; the first will be discussed later on. This emphasizes the fact that functions are objects too.
JavaScript lets you create anonymous functions.
var avg = function() {
  var sum = 0;
  for (var i = 0, j = arguments.length; i < j; i++) {
    sum += arguments[i];
  }
  return sum / arguments.length;
}
This is semantically equivalent to the function avg() form. It's extremely powerful, as it lets you put a full function definition anywhere that you would normally put an expression. This enables all sorts of clever tricks. Here's a way of "hiding" some local variables — like block scope in C:
> var a = 1;
> var b = 2;
> (function() {
    var b = 3;
    a += b;
  })();
> a
4
> b
2
JavaScript allows you to call functions recursively. This is particularly useful for dealing with tree structures, such as you get in the browser DOM.
function countChars(elm) {
  if (elm.nodeType == 3) { // TEXT_NODE
    return elm.nodeValue.length;
  }
  var count = 0;
  for (var i = 0, child; child = elm.childNodes[i]; i++) {
    count += countChars(child);
  }
  return count;
}
This highlights a potential problem with anonymous functions: how do you call them recursively if they don't have a name? JavaScript lets you name anonymous functions for this. You can use named IIFEs (Immediately Invoked Function Expressions) as below:
var charsInBody = (function counter(elm) {
  if (elm.nodeType == 3) { // TEXT_NODE
    return elm.nodeValue.length;
  }
  var count = 0;
  for (var i = 0, child; child = elm.childNodes[i]; i++) {
    count += counter(child);
  }
  return count;
})(document.body);
The name provided to an anonymous function as above is (or at least should be) only available to the function's own scope. This allows the engine to perform more optimizations and makes the code more readable. The name also shows up in the debugger, which can save you time.
Note that JavaScript functions are themselves objects and you can add or change properties on them just like on objects we've seen in the Objects section.
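A minimal sketch of that (the property name callCount is just illustrative): a function can carry its own data, just like any other object.
function greet() {
  greet.callCount++; // the function updates a property stored on itself
  return "hello";
}
greet.callCount = 0;
greet();
greet();
greet.callCount; // 2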
Custom objects
In classic Object Oriented Programming, objects are collections of data and methods that operate on that data. JavaScript is a prototype-based language which contains no class statement, such as is found in C++ or Java. (This is sometimes confusing for programmers accustomed to languages with a class statement.) Instead, JavaScript uses functions as classes. Let's consider a person object with first and last name fields. There are two ways in which the name might be displayed: as "first last" or as "last, first". Using the functions and objects that we've discussed previously, here's one way of doing it:
function makePerson(first, last) {
  return {
    first: first,
    last: last
  };
}
function personFullName(person) {
  return person.first + ' ' + person.last;
}
function personFullNameReversed(person) {
  return person.last + ', ' + person.first;
}
> s = makePerson("Simon", "Willison");
> personFullName(s);
Simon Willison
> personFullNameReversed(s);
Willison, Simon
This works, but it's pretty ugly. You end up with dozens of functions in your global namespace. What we really need is a way to attach a function to an object. Since functions are objects, this is easy:
function makePerson(first, last) {
  return {
    first: first,
    last: last,
    fullName: function() {
      return this.first + ' ' + this.last;
    },
    fullNameReversed: function() {
      return this.last + ', ' + this.first;
    }
  };
}
> s = makePerson("Simon", "Willison")
> s.fullName()
Simon Willison
> s.fullNameReversed()
Willison, Simon
There's something here we haven't seen before: the 'this' keyword. Used inside a function, 'this' refers to the current object. What that actually means is specified by the way in which you called that function. If you called it using dot notation or bracket notation on an object, that object becomes 'this'. If dot notation wasn't used for the call, 'this' refers to the global object. This is a frequent cause of mistakes. For example:
> s = makePerson("Simon", "Willison");
> var fullName = s.fullName;
> fullName();
undefined undefined
When we call fullName(), 'this' is bound to the global object. Since there are no global variables called first or last, we get undefined for each one.
We can take advantage of the 'this' keyword to improve our makePerson function:
function Person(first, last) {
  this.first = first;
  this.last = last;
  this.fullName = function() {
    return this.first + ' ' + this.last;
  };
  this.fullNameReversed = function() {
    return this.last + ', ' + this.first;
  };
}
var s = new Person("Simon", "Willison");
We've introduced another keyword: 'new'. new is strongly related to 'this'. It creates a brand new empty object, and then calls the function specified, with 'this' set to that new object. Functions that are designed to be called by 'new' are called constructor functions. Common practice is to capitalise these functions as a reminder to call them with new.
Our person objects are getting better, but there are still some ugly edges to them. Every time we create a person object we are creating two brand new function objects within it — wouldn't it be better if this code was shared?
function personFullName() {
  return this.first + ' ' + this.last;
}
function personFullNameReversed() {
  return this.last + ', ' + this.first;
}
function Person(first, last) {
  this.first = first;
  this.last = last;
  this.fullName = personFullName;
  this.fullNameReversed = personFullNameReversed;
}
That's better: we are creating the method functions only once, and assigning references to them inside the constructor. Can we do any better than that? The answer is yes:
function Person(first, last) {
  this.first = first;
  this.last = last;
}
Person.prototype.fullName = function() {
  return this.first + ' ' + this.last;
};
Person.prototype.fullNameReversed = function() {
  return this.last + ', ' + this.first;
};
Person.prototype is an object shared by all instances of Person. It forms part of a lookup chain (that has a special name, "prototype chain"): any time you attempt to access a property of Person that isn't set, JavaScript will check Person.prototype to see if that property exists there instead. As a result, anything assigned to Person.prototype becomes available to all instances of that constructor via the this object.
This is an incredibly powerful tool. JavaScript lets you modify something's prototype at any time in your program, which means you can add extra methods to existing objects at runtime:
> s = new Person("Simon", "Willison");
> s.firstNameCaps();
TypeError on line 1: s.firstNameCaps is not a function
> Person.prototype.firstNameCaps = function() {
    return this.first.toUpperCase();
  };
> s.firstNameCaps()
SIMON
Interestingly, you can also add things to the prototype of built-in JavaScript objects. Let's add a method to String that returns that string in reverse:
> var s = "Simon";
> s.reversed()
TypeError on line 1: s.reversed is not a function
> String.prototype.reversed = function() {
    var r = "";
    for (var i = this.length - 1; i >= 0; i--) {
      r += this[i];
    }
    return r;
  };
> s.reversed()
nomiS
Our new method even works on string literals!
> "This can now be reversed".reversed()
desrever eb won nac sihT
As I mentioned before, the prototype forms part of a chain. The root of that chain is Object.prototype, whose methods include toString() — it is this method that is called when you try to represent an object as a string. This is useful for debugging our Person objects:
> var s = new Person("Simon", "Willison");
> s
[object Object]
> Person.prototype.toString = function() {
    return '<Person: ' + this.fullName() + '>';
  }
> s
<Person: Simon Willison>
Remember how avg.apply() had a null first argument? We can revisit that now. The first argument to apply() is the object that should be treated as 'this'. For example, here's a trivial implementation of 'new':
function trivialNew(constructor) {
  var o = {}; // Create an object
  // Skip the first argument (the constructor itself) when forwarding
  constructor.apply(o, Array.prototype.slice.call(arguments, 1));
  return o;
}
This isn't an exact replica of new as it doesn't set up the prototype chain (it would be difficult to illustrate). This is not something you use very often, but it's useful to know about.
apply() has a sister function named call, which again lets you set 'this' but takes an expanded argument list as opposed to an array.
function lastNameCaps() {
  return this.last.toUpperCase();
}
var s = new Person("Simon", "Willison");
lastNameCaps.call(s);
// Is the same as:
s.lastNameCaps = lastNameCaps;
s.lastNameCaps();
Inner functions
JavaScript function declarations are allowed inside other functions. We've seen this once before, with an earlier makePerson() function. An important detail of nested functions in JavaScript is that they can access variables in their parent function's scope:
function betterExampleNeeded() {
  var a = 1;
  function oneMoreThanA() {
    return a + 1;
  }
  return oneMoreThanA();
}
This provides a great deal of utility in writing more maintainable code. If a function relies on one or two other functions that are not useful to any other part of your code, you can nest those utility functions inside the function that will be called from elsewhere. This keeps the number of functions that are in the global scope down, which is always a good thing.
This is also a great counter to the lure of global variables. When writing complex code it is often tempting to use global variables to share values between multiple functions — which leads to code that is hard to maintain. Nested functions can share variables in their parent, so you can use that mechanism to couple functions together when it makes sense without polluting your global namespace — 'local globals' if you like. This technique should be used with caution, but it's a useful ability to have.
Closures
This leads us to one of the most powerful abstractions that JavaScript has to offer — but also the most potentially confusing. What does this do?
function makeAdder(a) {
  return function(b) {
    return a + b;
  };
}
x = makeAdder(5);
y = makeAdder(20);
x(6) ?
y(7) ?
The name of the makeAdder function should give it away: it creates new 'adder' functions, which when called with one argument add it to the argument that they were created with.
What's happening here is pretty much the same as was happening with the inner functions earlier on: a function defined inside another function has access to the outer function's variables. The only difference here is that the outer function has returned, and hence common sense would seem to dictate that its local variables no longer exist. But they do still exist — otherwise the adder functions would be unable to work. What's more, there are two different "copies" of makeAdder's local variables — one in which a is 5 and one in which a is 20. So the result of those function calls is as follows:
x(6) // returns 11
y(7) // returns 27
Here's what's actually happening. Whenever JavaScript executes a function, a 'scope' object is created to hold the local variables created within that function. It is initialised with any variables passed in as function parameters. This is similar to the global object that all global variables and functions live in, but with a couple of important differences: firstly, a brand new scope object is created every time a function starts executing, and secondly, unlike the global object (which in browsers is accessible as window) these scope objects cannot be directly accessed from your JavaScript code. There is no mechanism for iterating over the properties of the current scope object for example.
So when makeAdder is called, a scope object is created with one property: a, which is the argument passed to the makeAdder function. makeAdder then returns a newly created function. Normally JavaScript's garbage collector would clean up the scope object created for makeAdder at this point, but the returned function maintains a reference back to that scope object. As a result, the scope object will not be garbage collected until there are no more references to the function object that makeAdder returned.
Scope objects form a chain called the scope chain, similar to the prototype chain used by JavaScript's object system.
A closure is the combination of a function and the scope object in which it was created.
Closures let you save state — as such, they can often be used in place of objects.
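For example, a minimal sketch of a closure standing in for an object (the names are illustrative): the returned function is the only way to reach the state it closes over.
function makeCounter() {
  var count = 0; // private state, kept alive by the closure
  return function() {
    count += 1;
    return count;
  };
}
var next = makeCounter();
next(); // 1
next(); // 2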
Memory leaks
An unfortunate side effect of closures is that they make it trivially easy to leak memory in Internet Explorer. JavaScript is a garbage collected language — objects are allocated memory upon their creation and that memory is reclaimed by the browser when no references to an object remain. Objects provided by the host environment are handled by that environment.
Browser hosts need to manage a large number of objects representing the HTML page being presented — the objects of the DOM. It is up to the browser to manage the allocation and recovery of these.
Internet Explorer uses its own garbage collection scheme for this, separate from the mechanism used by JavaScript. It is the interaction between the two that can cause memory leaks.
A memory leak in IE occurs any time a circular reference is formed between a JavaScript object and a native object. Consider the following:
function leakMemory() {
  var el = document.getElementById('el');
  var o = { 'el': el };
  el.o = o;
}
The circular reference formed above creates a memory leak; IE will not free the memory used by el and o until the browser is completely restarted.
The above case is likely to go unnoticed; memory leaks only become a real concern in long running applications or applications that leak large amounts of memory due to large data structures or leak patterns within loops.
Leaks are rarely this obvious — often the leaked data structure can have many layers of references, obscuring the circular reference.
Closures make it easy to create a memory leak without meaning to. Consider this:
function addHandler() {
  var el = document.getElementById('el');
  el.onclick = function() {
    this.style.backgroundColor = 'red';
  };
}
The above code sets up the element to turn red when it is clicked. It also creates a memory leak. Why? Because the reference to el is inadvertently caught in the closure created for the anonymous inner function. This creates a circular reference between a JavaScript object (the function) and a native object (el).
There are a number of workarounds for this problem. The simplest is not to use the el variable:
function addHandler() {
  document.getElementById('el').onclick = function() {
    this.style.backgroundColor = 'red';
  };
}
Surprisingly, one trick for breaking circular references introduced by a closure is to add another closure:
function addHandler() {
  var clickHandler = function() {
    this.style.backgroundColor = 'red';
  };
  (function() {
    var el = document.getElementById('el');
    el.onclick = clickHandler;
  })();
}
The inner function is executed straight away, and hides its contents from the closure created with clickHandler.
Another good trick for avoiding closures is breaking circular references during the window.onunload event. Many event libraries will do this for you. Note that doing so disables bfcache in Firefox 1.5, so you should not register an unload listener in Firefox, unless you have other reasons to do so.
Original Document Information
- Author: Simon Willison
- Last Updated Date on MDN: Feb 03, 2014
- Copyright: © 2006 Simon Willison, contributed under the Creative Commons: Attribution-ShareAlike 2.0 license.
- More information: For more information about this tutorial (and for links to the original talk's slides), see Simon's Etech weblog post.
Note: Some features have been added to the JavaScript language since this document was written. That said, it is still a very relevant resource.
And just like that Grunt and RequireJS are out, it's all about Gulp and Browserify now » { 100PercentJS }
URL:http://www.100percentjs.com/just-like-grunt-gulp-browserify-now/
In a demonstration of how insane the JavaScript world is, a revolution happened last week, and it looks like Grunt was dethroned as the go-to task-runner. But wait, you may say, wasn't the Node and Grunt revolution just beginning? After all, Grunt had just managed to find its way into job descriptions. Apparently we weren't done revolutionizing.
This is how it happened from my point of view. I got linked to this:
When I saw that first post I knew that Addy Osmani is always way out on the front lines, so I thought there would still be some months of using Grunt before Gulp took over.
But then when Hage Yaapa, who wrote a great book on Express and who always posts great resources, praised it so much I realized it’s probably awesome.
And finally, when Sindre Sorhus, also on the Yeoman team, major JavaScript innovator and leading npm contributor, posted a quick tutorial about it, I had to look.
And what I saw was that it is indeed much better, much more intuitive to Node.js devs and simpler to use. And that concluded it for me.
So now, instead of Grunt's harder-to-understand syntax and laborious pre-configuration, we have this:
var gulp = require('gulp');
var browserify = require('gulp-browserify');
var concat = require('gulp-concat');
var styl = require('gulp-styl');
var refresh = require('gulp-livereload');
var lr = require('tiny-lr');
var server = lr();

gulp.task('scripts', function() {
  gulp.src(['src/**/*.js'])
    .pipe(browserify())
    .pipe(concat('dest.js'))
    .pipe(gulp.dest('build'))
    .pipe(refresh(server))
})

gulp.task('styles', function() {
  gulp.src(['css/**/*.css'])
    .pipe(styl({compress : true}))
    .pipe(gulp.dest('build'))
    .pipe(refresh(server))
})

gulp.task('lr-server', function() {
  server.listen(35729, function(err) {
    if(err) return console.log(err);
  });
})

gulp.task('default', function() {
  gulp.run('lr-server', 'scripts', 'styles');
  gulp.watch('src/**', function(event) {
    gulp.run('scripts');
  })
  gulp.watch('css/**', function(event) {
    gulp.run('styles');
  })
})
Then, while taking a look at the link Sindre Sorhus posted, I noticed Browserify. I took a look at that and realized it gives you Node.js-style module dependencies on the client, which essentially eliminates the need for RequireJS or any of its analogues.
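A minimal sketch of the idea (the file contents and names here are illustrative, not from the article): you write plain Node-style modules, and Browserify bundles them for the browser.
// math.js: an ordinary Node-style module
module.exports.square = function(x) {
  return x * x;
};

// main.js: browser code can now require() it directly
var math = require('./math.js');
console.log(math.square(7)); // 49

// then bundle for the browser with: browserify main.js -o bundle.js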
I love it. So much better. And just like that I’m a convert. I’m always using Gulp and Browserify… Until next week!
California Cracks Down on Hacker Boot Camps | Wired Enterprise | Wired.com
Comments:"California Cracks Down on Hacker Boot Camps | Wired Enterprise | Wired.com"
URL:http://www.wired.com/wiredenterprise/2014/01/california-hacker-bootcamps
Hacker boot camps have sprung up across the world in recent years, offering crash courses in the art and science of computer programming. These schools are particularly prevalent in the San Francisco Bay Area, the heart of the tech world. And in a place where demand for coders just keeps going up, the schools are very popular.
But now these Silicon Valley schools have a problem.
Over the past month, California regulators sent cease and desist letters to many of these hacker boot camps, saying they run afoul of the state's educational laws, as first reported by VentureBeat. "They're not properly licensed, and the law requires them to be licensed to offer an educational service like they are," says Russ Heimerich, a spokesperson for the California Bureau for Private Postsecondary Education, or BPPE.
It's unclear how difficult it will be for these schools to comply with state laws, and though they are likely to stay open in the short term, they may have to change the way they do things. The news is yet another example of the fast-and-loose Silicon Valley ethos running up against the more strait-laced attitudes of government regulators.
Regulators in Hackerland
With names like Dev Bootcamp, Hackbright, and Hack Reactor, these schools are a response to the growing demand for skilled programming across Silicon Valley and beyond. They offer short but intensive courses, usually lasting from nine to 12 weeks, with students often spending 12 to 16 hours a day on projects and course work.
They aren’t your traditional vocational schools. There are no grades, no degrees, and no diplomas. They’re usually staffed by professional coders, not licensed teachers. Many of the teachers are volunteers — even though the schools are usually private companies, not non-profit organizations. And many schools are backed by investments from big-name Silicon Valley venture capital firms.
At least eight of these schools have now received a cease and desist letter — though, according to Bill Liao, co-founder of the hacker boot camp Coder Dojo, who recently met with the leaders of several other hacker boot camps, some boot camps have not received the letter. Those who did could face steep fines. But Heimerich says the state would rather get the schools into compliance and licensed.
The regulations came as a surprise to many of the schools. “Prior to the letters from the BPPE, we were not aware of them,” says HackBright co-founder David J. Phillips. But it’s not hard to see why the agency would zero in on boot camps, given that many of these companies charge upwards of $10,000 and make bold claims about alumni making six figure salaries upon graduation. Part of the BPPE’s mission is to license private vocational schools and ensure that they aren’t diploma mills or scams.
Last year, Fast Company published an article questioning the business model of these companies and the promises they make. For example, while the schools boast of high placement rates, many students either quit or are kicked out, which inflates placement numbers. The article also criticized some programs for accepting recruitment fees from partner companies that hire alums.
The founders of these boot camps say they’re not opposed to regulation, but they worry that some state laws and regulations may not translate well from traditional schools to hacker boot camps.
“We welcome regulation and oversight,” says Appacademy co-founder Kush Patel. “We have nothing to fear from that process. We think it makes sense.” But he says these regulations were developed for schools that grant degrees and diplomas, and certain requirements may not translate well to boot camps. Because the company is still in discussions with the BPPE, he declines to name any specific regulation that Appacademy might have trouble complying with. “Whatever the case, we’re going to comply with the regulations of the state,” he says.
One concern is the time that it will take the companies to go through the license process, and whether they will be able to continue operating. Heimerich wouldn’t go so far as to say that the agency will turn a blind eye to the schools’ activities during the applications process, but he did say that as long as the schools make a good faith effort to come into compliance, shutting them down will fall low on the agency’s list of priorities.
Striking a Balance
This isn’t the first time hacker boot camps have come under fire from government regulators. Last summer, Toronto-based school Bitmaker Labs temporarily closed during an investigation by Ontario’s Ministry of Training, Colleges, and Universities. The government agency was worried that Bitmaker had violated a 2005 law that prohibits any institution offering “instruction in the skills and knowledge required in order to obtain employment in a prescribed vocation” without approval.
At the time, Bitmaker co-founder Matt Gray worried that the company would go out of business before it could complete the application process, which could have taken months. But the agency quickly decided that Bitmaker was exempt from the law thanks to its standards for entry. "When they looked at our program, it sounded more like a professional development program than a vocational program, which gave us an exemption," Gray told us last year.
But it's unlikely that the California boot camps will dodge regulation the same way their Canadian counterpart did. "It doesn't look like it, based on our reading of the law," says Heimerich. He says the only exemptions are for courses that cost less than $2,500, programs affiliated with accredited schools, and certain religious education programs.
That may be a good thing. Although the legal loophole was good news for Bitmaker, it didn’t address the broader issue: that it would be relatively easy for a fly-by-night operation to advertise a developer bootcamp, collect tuition, and then provide low-quality education — or skip town before the courses were even set to begin. Even if the current crop of hacker boot camps are perfectly ethical, there’s no guaranteeing the integrity of future entrants into the market.
Given the level of demand — DevBootcamp’s San Francisco program is booked through summer — you can bet more of these schools are on the horizon. Many more.
Imitate. We are imperfect mirrors. | Derek Sivers
Comments:"Imitate. We are imperfect mirrors. | Derek Sivers"
You know that song you love, that you wish you wrote? Copy it.
You know that existing business, that you wish you had thought of? Copy it.
Why?
Because humans are imperfect mirrors.
Like a funhouse mirror that distorts what it reflects, even if you try to imitate something, it will turn out much different than the original. Maybe better.
When a musician covers someone else's song, they clearly reveal their own warped perspective, since we know what the original looks (sounds) like. Because of this, a cover song is actually a great way to define who you are as an artist.
When a musician writes a new song, imitating someone else's song, it's usually unrecognizable. You have to tell people of its inspiration, for them to make the connection.
So an entrepreneur can imitate someone else's business, and still be adding a unique service to the world.
I resisted this lesson as long as I could. It offended my instincts as an artist. I felt that everything I did had to be 100% original. Everything “not invented here” was out of the question.
Back in my CD Baby days, my only direct competitor had one awesome advantage: old fashioned credit card swiping machines that musicians could use to sell CDs at gigs, even without electricity or a 3G connection. Musicians would tell me how much they loved that service, and even told me they wish we had it too. I said, “Yeah. Damn. Oh well.” Because imitating their offering didn't even enter my mind.
It took a whole year for me to swallow my pride, and realize I'd be doing my clients a favor if I imitated that idea. It turned out to be one of the most successful things I ever did.
Those little credit card swiping machines charged over $8.5 million for thousands of musicians, which meant $7.75 million paid out to the musicians, and $750k profit for us. And the whole thing only took two weeks to make, and one employee to run.
So look around at those existing ideas in the world.
Get over that self-important resistance, and do the world a favor.
© 2014 Derek Sivers
VPS disk performance, Digital Ocean vs Linode part II | Ian Johnson’s blog
Comments:"VPS disk performance, Digital Ocean vs Linode part II | Ian Johnson’s blog"
URL:http://irj972.co.uk/articles/VPS-disk-perf-cont
02/02/2014
Curiosity drove me to return to disk benchmarks today to run some more tests on Digital Ocean's and Linode's systems. The numbers didn't fit with my expectations and I wanted to understand why there was a discrepancy: was it just a bad day at Digital Ocean? Time to dig a little deeper...
I/O Latency (ioping)
Digital Ocean
root@irj972:/home/ianj# ioping /tmp -c 10
4096 bytes from /tmp (ext4 /dev/disk/by-label/DOROOT): request=1 time=0.2 ms
4096 bytes from /tmp (ext4 /dev/disk/by-label/DOROOT): request=2 time=0.3 ms
4096 bytes from /tmp (ext4 /dev/disk/by-label/DOROOT): request=3 time=0.2 ms
4096 bytes from /tmp (ext4 /dev/disk/by-label/DOROOT): request=4 time=0.3 ms
4096 bytes from /tmp (ext4 /dev/disk/by-label/DOROOT): request=5 time=0.2 ms
4096 bytes from /tmp (ext4 /dev/disk/by-label/DOROOT): request=6 time=0.3 ms
4096 bytes from /tmp (ext4 /dev/disk/by-label/DOROOT): request=7 time=0.2 ms
4096 bytes from /tmp (ext4 /dev/disk/by-label/DOROOT): request=8 time=0.3 ms
4096 bytes from /tmp (ext4 /dev/disk/by-label/DOROOT): request=9 time=0.3 ms
4096 bytes from /tmp (ext4 /dev/disk/by-label/DOROOT): request=10 time=0.3 ms
Linode
root@localhost:~# ioping /tmp -c 10
4096 bytes from /tmp (ext3 /dev/root): request=1 time=0.2 ms
4096 bytes from /tmp (ext3 /dev/root): request=2 time=0.2 ms
4096 bytes from /tmp (ext3 /dev/root): request=3 time=0.2 ms
4096 bytes from /tmp (ext3 /dev/root): request=4 time=0.1 ms
4096 bytes from /tmp (ext3 /dev/root): request=5 time=0.2 ms
4096 bytes from /tmp (ext3 /dev/root): request=6 time=0.2 ms
4096 bytes from /tmp (ext3 /dev/root): request=7 time=0.2 ms
4096 bytes from /tmp (ext3 /dev/root): request=8 time=0.2 ms
4096 bytes from /tmp (ext3 /dev/root): request=9 time=0.8 ms
4096 bytes from /tmp (ext3 /dev/root): request=10 time=0.2 ms
I/O Seek Test (no cache)
Digital Ocean
root@irj972:/home/ianj# ioping /tmp -RD
--- /tmp (ext4 /dev/disk/by-label/DOROOT) ioping statistics ---
11516 requests completed in 3000.1 ms, 8725 iops, 34.1 mb/s
min/avg/max/mdev = 0.1/0.1/12.0/0.2 ms
Linode
root@localhost:~# ioping /tmp -RD
--- /tmp (ext3 /dev/root) ioping statistics ---
7757 requests completed in 3000.3 ms, 5002 iops, 19.5 mb/s
min/avg/max/mdev = 0.1/0.2/87.1/2.0 ms
I/O Reads (cached)
Digital Ocean
root@irj972:/home/ianj# ioping /tmp -RC
--- /tmp (ext4 /dev/disk/by-label/DOROOT) ioping statistics ---
26603 requests completed in 3000.0 ms, 156733 iops, 612.2 mb/s
min/avg/max/mdev = 0.0/0.0/1.6/0.0 ms
Linode
root@localhost:~# ioping /tmp -RC
--- /tmp (ext3 /dev/root) ioping statistics ---
16200 requests completed in 3000.1 ms, 93285 iops, 364.4 mb/s
min/avg/max/mdev = 0.0/0.0/0.0/0.0 ms
I/O Reads - Sequential
Digital Ocean
root@irj972:/home/ianj# ioping /tmp -RL
--- /tmp (ext4 /dev/disk/by-label/DOROOT) ioping statistics ---
4826 requests completed in 3000.1 ms, 2332 iops, 582.9 mb/s
min/avg/max/mdev = 0.3/0.4/12.6/0.3 ms
Linode
root@localhost:~# ioping /tmp -RL
--- /tmp (ext3 /dev/root) ioping statistics ---
5162 requests completed in 3000.3 ms, 2677 iops, 669.1 mb/s
min/avg/max/mdev = 0.2/0.4/1.5/0.2 ms
FIO - Random read test, queue depth=1.
Digital Ocean
root@irj972:/home/ianj# cat random-read-test.fio
[random-read]
rw=randread
size=128m
directory=/tmp/fio-test/data
root@irj972:/home/ianj# fio random-read-test.fio
random-read: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
fio 1.59
Starting 1 process
random-read: Laying out IO file(s) (1 file(s) / 128MB)
Jobs: 1 (f=1): [r] [100.0% done] [13355K/0K /s] [3260 /0 iops] [eta 00m:00s]
random-read: (groupid=0, jobs=1): err= 0: pid=29086
read : io=131072KB, bw=16419KB/s, iops=4104 , runt= 7983msec
clat (usec): min=72 , max=24936 , avg=233.35, stdev=744.50
lat (usec): min=72 , max=24937 , avg=233.68, stdev=744.51
bw (KB/s) : min=12570, max=26088, per=101.13%, avg=16602.93, stdev=4092.67
cpu : usr=4.51%, sys=47.01%, ctx=9367, majf=1, minf=23
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w/d: total=32768/0/0, short=0/0/0
lat (usec): 100=2.88%, 250=92.12%, 500=2.27%, 750=0.29%, 1000=0.22%
lat (msec): 2=0.38%, 4=0.69%, 10=1.04%, 20=0.10%, 50=0.01%
Run status group 0 (all jobs):
READ: io=131072KB, aggrb=16418KB/s, minb=16812KB/s, maxb=16812KB/s, mint=7983msec, maxt=7983msec
Disk stats (read/write):
vda: ios=32396/5, merge=0/27, ticks=4200/0, in_queue=4164, util=51.99%
Linode
root@localhost:~# cat random-read-test.fio
[random-read]
rw=randread
size=128m
directory=/tmp/fio-test/data
root@localhost:~# fio random-read-test.fio
random-read: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
fio 1.59
Starting 1 process
random-read: Laying out IO file(s) (1 file(s) / 128MB)
Jobs: 1 (f=1): [r] [100.0% done] [32890K/0K /s] [8030 /0 iops] [eta 00m:00s]
random-read: (groupid=0, jobs=1): err= 0: pid=4115
read : io=131072KB, bw=32316KB/s, iops=8078 , runt= 4056msec
clat (usec): min=62 , max=413 , avg=114.31, stdev=17.83
lat (usec): min=63 , max=414 , avg=115.62, stdev=17.81
bw (KB/s) : min=31536, max=32880, per=100.03%, avg=32326.00, stdev=440.52
cpu : usr=0.00%, sys=27.10%, ctx=32770, majf=0, minf=24
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w/d: total=32768/0/0, short=0/0/0
lat (usec): 100=10.91%, 250=89.06%, 500=0.04%
Run status group 0 (all jobs):
READ: io=131072KB, aggrb=32315KB/s, minb=33091KB/s, maxb=33091KB/s, mint=4056msec, maxt=4056msec
Disk stats (read/write):
xvda: ios=32379/0, merge=0/0, ticks=3344/0, in_queue=3343, util=81.46%
Summary of 1 deep queue testing
FIO - Random read test, queue depth=8.
I performed the same test again, but this time using the asynchronous IO subsystem to issue up to 8 requests simultaneously. I wanted to put more pressure on Linode's system due to the higher latencies associated with rotating disks.
Digital Ocean
[random-read]
rw=randread
size=128m
directory=/tmp
ioengine=libaio
iodepth=8
direct=1
invalidate=1
root@irj972:/home/ianj# fio rand-read-test-aio.fio
random-read: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=8
fio 1.59
Starting 1 process
Jobs: 1 (f=1): [r] [-.-% done] [64877K/0K /s] [15.9K/0 iops] [eta 00m:00s]
random-read: (groupid=0, jobs=1): err= 0: pid=3288
read : io=131072KB, bw=62386KB/s, iops=15596 , runt= 2101msec
slat (usec): min=6 , max=4902 , avg=32.49, stdev=52.94
clat (usec): min=80 , max=5270 , avg=476.55, stdev=226.08
lat (usec): min=186 , max=5291 , avg=510.26, stdev=209.81
bw (KB/s) : min=60408, max=63512, per=100.20%, avg=62512.00, stdev=1419.21
cpu : usr=9.52%, sys=53.33%, ctx=3319, majf=0, minf=29
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w/d: total=32768/0/0, short=0/0/0
lat (usec): 100=8.39%, 250=6.87%, 500=39.42%, 750=37.18%, 1000=7.12%
lat (msec): 2=0.99%, 4=0.01%, 10=0.04%
Run status group 0 (all jobs):
READ: io=131072KB, aggrb=62385KB/s, minb=63882KB/s, maxb=63882KB/s, mint=2101msec, maxt=2101msec
Disk stats (read/write):
vda: ios=29769/0, merge=0/0, ticks=7424/0, in_queue=7392, util=78.60%
Linode
root@localhost:~# cat rand-read-test-aio.fio
[random-read]
rw=randread
size=128m
directory=/tmp
ioengine=libaio
iodepth=8
direct=1
invalidate=1
root@localhost:~# fio rand-read-test-aio.fio
random-read: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=8
fio 1.59
Starting 1 process
random-read: (groupid=0, jobs=1): err= 0: pid=4196
read : io=131072KB, bw=181792KB/s, iops=45447 , runt= 721msec
slat (usec): min=6 , max=67 , avg= 9.19, stdev= 3.38
clat (usec): min=67 , max=3870 , avg=162.13, stdev=68.84
lat (usec): min=82 , max=3884 , avg=172.48, stdev=68.91
bw (KB/s) : min=177560, max=177560, per=97.67%, avg=177560.00, stdev= 0.00
cpu : usr=23.47%, sys=56.39%, ctx=4750, majf=0, minf=29
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w/d: total=32768/0/0, short=0/0/0
lat (usec): 100=9.11%, 250=83.39%, 500=7.45%, 750=0.01%, 1000=0.01%
lat (msec): 2=0.02%, 4=0.03%
Run status group 0 (all jobs):
READ: io=131072KB, aggrb=181791KB/s, minb=186154KB/s, maxb=186154KB/s, mint=721msec, maxt=721msec
Disk stats (read/write):
xvda: ios=29033/0, merge=0/0, ticks=4487/0, in_queue=4467, util=84.79%
Summary of 8 deep queue testing
I also tested with 32-deep IO queues to see how the throughput would change (a job file for this is sketched below). Digital Ocean stalled at around 80MB/s at 60% CPU, whereas Linode's system rose to 240MB/s, by which point it was reading 99% CPU utilisation - that's 300% of Digital Ocean's disk bandwidth!
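The job file for the 32-deep run isn't shown in the post; presumably it is the queue depth 8 file above with iodepth raised, along these lines:
[random-read]
rw=randread
size=128m
directory=/tmp
ioengine=libaio
iodepth=32
direct=1
invalidate=1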
Summary
It's apparent that Linode's disk system beats Digital Ocean's in just about every area, and not by a small margin. The results from multiple tests, across both sequential and random operations, show this clearly. Closer analysis of the data via fio highlights that Digital Ocean's system struggles to deal with all disk operations in a timely manner; the range of timings between the quickest and slowest operations is killing the overall bandwidth. Linode, on the other hand, serves all requests quickly and so holds together better under the strain of these tests. What exactly causes the large deviation I can only guess, but my suspicion is server loading, possibly as a result of overselling to enable the low prices. I don't know if this is right, but if it is, it will be interesting to see how Digital Ocean's performance fares in the future: they are definitely the hot new kid on the block, have a great marketing program and are hoovering up lots of new accounts on a daily basis. As always, it pays to look beyond marketing. Digital Ocean's claim of "Blazing Fast 20GB SSD" might not be entirely misleading, especially compared to Amazon's performance, but Linode's old-fashioned rotating platters have them bested right now.
Super Bowl Wi-Fi password credentials broadcast in pre-game security gaffe | ZDNet
Comments:"Super Bowl Wi-Fi password credentials broadcast in pre-game security gaffe | ZDNet"
During the pre-game coverage for NFL Super Bowl XLVIII, television news inadvertently broadcast the stadium's internal Wi-Fi login credentials, which were in plain sight on an enormous, unmissable, wall-mounted monitor inside a command center.
The Wi-Fi credentials, which have likely been changed as news of the security gaffe has spread like wildfire on Twitter and community blogs, had "marko" as the login, and a pseudo-leet speak variation of 'welcome here' as the password.
The televised segment broadcast this morning was a feature that gave a first-time peek into Super Bowl security headquarters.
It would appear that network security at the MetLife Stadium in East Rutherford, New Jersey, is not up to enterprise levels.
According to Mobile Sports Report, the in-stadium Wi-Fi network at MetLife Stadium, built by Verizon, is free and open to customers of all carriers.
The credentials accidentally broadcast on TV were likely an internal set of Wi-Fi access credentials, possibly for staff, press or ticketing systems.
While it's good to see the stadium's credentials were not 'admin' and 'password', the security failure will no doubt become yet another example of what not to do.
This year's Super Bowl match, where the Seattle Seahawks face off against the Denver Broncos, is expecting over 82,000 Wi-Fi-enabled guests.
Last year Ars Technica reported that along with a "no outside food" policy, attendees were disallowed from bringing wireless equipment that might interfere with the New Orleans Mercedes-Benz Superdome's Wi-Fi network.
"The NFL has a very robust frequency coordination solution in place," Dave Stewart, director of IT and production for Superdome management firm SMG, told me in a phone interview. "Every device that enters the building has to go through a frequency scan and be authorized to enter. At the perimeter the devices are identified and tagged. If they present a potential for interference, they are remediated at that moment. Either the channel is changed or it is denied access. It's all stopped at the perimeter for this event." In Stewart's words, the goal is to prevent any "rogue access points or rogue equipment from attempting to operate in the same frequency" as the stadium Wi-Fi network ("rogue" as in "not under the control of the system administrators").The system went down during the game due to a "relay failure."
Hopefully, after today's gaffe, internal Super Bowl security standards will be brought up to the level of those exercised on fans.
WhatsNew - Mercurial
Comments:"WhatsNew - Mercurial"
URL:http://mercurial.selenic.com/wiki/WhatsNew#Mercurial_2.9_.282014-2-1.29
Features and bugfixes in our latest releases. Please see the Download page for links to source and binaries.
Note that Mercurial follows a time-based release plan with major releases every three months and minor (bugfix) releases on the first of every month (see TimeBasedReleasePlan).
Be sure to read the upgrade notes when upgrading.
(See the archive for older versions.)
1. Mercurial 2.9 (2014-02-01)
This is a regularly-scheduled feature release.
- aliases: make "_checkshellalias()" invoke "findcmd()" with "strict=True"
- backout: add a message after backout that needs manual commit
- backout: avoid update on simple case
- bash_completion: add completion for deleting a shelve
- bash_completion: add global support for -B|--bookmark
- bash_completion: add global support for -b|--branch
- bisect: --command without --noupdate should flag the parent rev it tested
- bookmarks: allow push -B to create a new remote head (issue2372) (see the example after this list)
- branchmap: cache open/closed branch head information
- cat: increase perf when catting single files
- changectx: increase perf of walk function
- clone: do not turn hidden changesets public on publishing clone (issue3935)
- convert: use branchmap to change default branch in destination (issue3469)
- date: allow %z in format (issue4040)
- diff: search beyond ancestor when detecting renames
- hgweb: infinite scroll support for coal, gitweb, and monoblue styles
- merge: consider successor changesets for a bare update
- patch: add support for git delta hunks
- phase: properly compute ancestors of --rev on push (issue3786)
- rebase: abort cleanly when we encounter a damaged rebasestate (issue4155)
- rebase: do not crash in panic when cwd disappears in the process (issue4121)
- record: --user/-u now works with record when ui.username not set (issue3857)
- record: re-enable whitespace-ignoring options
- relink: abort earlier when on different devices (issue3916)
- strip: add faster revlog strip computation
- subrepo: check phase of state in each subrepository before committing
- subrepo: make it possible to update to hidden subrepo revisions
- subsettable: move from repoview to branchmap, the only place it's used
- templater: selecting a style with no templates does not crash (issue4140)
- update: consider successor changesets when moving active bookmark
- url: added authuri when login information is requested (issue3209)
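As a quick illustration of the push -B change above (a sketch; the bookmark name and commit are placeholders, not taken from the release notes):
hg bookmark feature-x
hg commit -m "start a new line of work"
hg push -B feature-x    # may now create a new remote head carrying the bookmark (issue2372)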
2. Mercurial 2.8.2 (2014-01-01)
This is a regularly-scheduled bugfix release.
- fileset, revset: do not use global parser object for thread safety
- hgweb: avoid initialization race (issue3953)
- mpatch: rewrite pointer overflow checks
3. Mercurial 2.8.1 (2013-12-01)
This is a regularly-scheduled bugfix release.
- bookmarks: consider successor changesets when moving bookmark (issue4015)
- contrib: don't mention obsolete graphlog extension in mercurial.ini
- contrib: promote strip extension over MQ in sample.hgrc
- contrib: stop mentioning obsolete graphlog extension in sample.hgrc
- convert: fix svn crash when svn.ra.get_log calls back with orig_paths=None
- help: fix backwards bisect help example
- help: use progress instead of mq as in 'hg help config' example
- hgk: fix tag list parser (issue4101)
- hgweb: ignore non-numeric "revcount" parameter values (issue4091)
- histedit: hold wlock and lock while in progress
- largefiles: cache largefiles for update, also without printmessage
- largefiles: don't crash on 'local renamed directory' actions
- merge: move forgets to the beginning of the action list
- minirst: do not interpret a directive as a literal block
- minirst: find admonitions before pruning comments and adding margins
- obsolete: stop doing membership test on list
- parse_index2: fix crash on bad argument type (issue4110)
- phase: better error message when --force is needed
- rebase: fix rebase aborts when 'tip-1' is public (issue4082)
- rebase: fix working copy location after a --collapse (issue4080)
- share: fix unshare calling wrong repo.init() method
- shelve: fix bad argument interaction with largefiles (issue4111)
- shelve: unshelve using an unfiltered repository
- strip: fix last unprotected mq reference (issue4097)
- strip: hold wlock for entire duration
- subrepo: sanitize non-hg subrepos
- templater: fix escaping in nested string literals (issue4102)
- templater: make branches work correctly with stringify (issue4108)
- templater: only recursively evaluate string literals as templates (issue4103)
- unshelve: add tests for unknown files
- unshelve: don't commit unknown files during unshelve (issue4113)
- util: url keeps backslash in paths
- util: warn when adding paths ending with \
4. Mercurial 2.8 (2013-11-01)
This is a regularly scheduled feature release.
4.1. Core features
- hgweb: add revset syntax support to search
- hgweb: always run search when a query is entered (BC)
- hgweb (paper theme): add infinite scrolling to graph
- hgweb: show full date in rfc822 format in tooltips at shortlog page
- proxy: allow wildcards in the no proxy list (issue1821)
- pull: for pull --update with failed update, print hint if any
- rebase: preserve working directory parent (BC)
- sslutil: add a config knob to support TLS (default) or SSLv23 (BC) (issue4038)
- templatefilters: add short format for age formatting
- templater: support using templates with non-standard names from map file
- update: add error message for dirty non-linear update with no rev
- addremove: don't do full walks
- log: make file log slow path usable on huge repos
- subrepo: let the user choose to merge, keep local or keep remote subrepo revisions
4.2. Extension features
- convert-internals: introduce hg.revs to replace hg.startrev and --rev with a revset
- convert-internals: update source shamap when using filemap, just as when not using filemap
- factotum: clean up keychain for multiple hg repository authentication
- histedit: abort if there are multiple roots in "--outgoing" revisions
- mq: extract strip function as a standalone extension (issue3824)
- mq: look for modified subrepos when checking for local changes
- rebase: remove bailifchanged check from pullrebase (BC)
- shelve: add a shelve extension to save/restore working changes
4.3. Fixes
- pager: honour internal aliases
- patch: ensure valid git diffs if source/destination file is missing (issue4046)
- patch: fix nullid for binary git diffs (issue4054)
- progress: stop getting stuck in a nested topic during a long inner step
- rebase: handle bookmarks matching revset function names (issue3950)
- rebase: preserve active bookmark when not at head (issue3813)
- rebase: preserve metadata from grafts of changes (issue4001)
- rebase: fix selection of base used when rebasing merge (issue4041)
- ui: send password prompts to stderr again (issue4056)
5. Mercurial 2.7.2 (2013-10-01)
Regularly scheduled bugfix release. This fixes significant regressions from 2.7 in push/pull performance and SSL negotiation.
- bundle: fix performance regression when bundling file changes (issue4031)
- generaldelta: initialize basecache properly
- help: use full name of extensions to look them up for keyword search
- histedit: abort if there are multiple roots in "--outgoing" revisions
- histedit: add more detailed help about "--outgoing"
- histedit: suggest "histedit --abort" for inconsistent histedit state
- httpclient: apply upstream revision da7579b034a4 to fix SSL problems (issue4038)
- rebase: catch RepoLookupError at restoring rebase state for abort/continue
- rebase: catch RepoLookupError at restoring rebase state for summary
- repoview: have unfilteredpropertycache using the underlying cache
- repoview: make propertycache.setcache compatible with repoview
- revset: fix wrong keyword() behaviour for strings with spaces
- sslutil: backed out changeset 074bd02352c0 (issue4038)
- strip: set current bookmark to None if stripped
6. Mercurial 2.7.1 (2013-09-03)
Regularly scheduled bugfix release.
- rebase: handle bookmarks matching revset function names (issue3950)
- tags: write tag overwriting history also into tag cache file (issue3911)
7. Mercurial 2.7 (2013-08-01)
Regularly scheduled feature release. This release contains an important fix for a merge ancestor calculation regression in the 2.6 series.
7.1. Core features
- bookmarks: allow bookmark command to take multiple arguments
- commands: add checks for unfinished operations (issue3955)
- commit: enable --secret option
- hgweb: run search instead of showing wrong error for ambiguous identifier
- import: cut commit messages at --- unconditionally (issue2148)
- log: add a log style that is default+phase (issue3436)
- paper: add line wrapping switch to file source view
- paper: code selection without line numbers in file source view
- paper: highlight line which is linked to in source view
- revert: make backup when unforgetting a file (issue3423)
- rollback: mark as deprecated
- sslutil: force SSLv3 on Python 2.6 and later (issue3905)
- summary: augment output with info from extensions
- templater: add strip function with chars as an extra argument
- log: show style list when unknown style specified
- tip: deprecate the tip command
- update: add tracking of interrupted updates (issue3113)
7.2. Extension features
- churn: split email aliases from the right
- histedit: refuse to edit history that contains merges (issue3962)
- convert: improve error handling when parsing splicemap (issue2084)
- convert: support paths with spaces in splicemap (issue3844)
7.3. Fixes
- ancestor: fix a reference counting bug in the C version (issue3984)
- bookmarks: update only proper bookmarks on push -r/-B (issue3973)
- bookmarks: pull --update updates to active bookmark if it moved (issue4007)
- changegroup: fix fastpath during commit
- checklink: work around sshfs brain-damage (issue3636)
- convert: catch empty origpaths in svn gettags (issue3941)
- convert: fix bad conversion of copies when hg.startrev is specified
- convert: handle changeset sorting errors without traceback (issue3961)
- hgweb: fix incorrect way to count revisions in log (issue3977)
- histedit: don't clobber working copy on --abort if not on histedit cset
- largefiles: overridematch() should replace the file path instead of extending (issue3934)
- progress: respect HGPLAIN
- rebase: allow aborting when descendants detected
- rebase: continue abort without strip for immutable csets (issue3997)
- rebase: don't clobber wd on --abort when we've updated away (issue4009)
- revlog: handle hidden revs in _partialmatch (issue3979)
8. Mercurial 2.6.3 (2013-07-01)
This is a regularly-scheduled bugfix release.
- commit: amending with --close-branch (issue3445)
- doc: make it easier to read how to enable extensions
- doc: reword "config file" to "configuration file"
- docs: change description to synopsis in hgrc.5
- histedit: raise ImportError when demandloading is enabled
- pathencode: fix hashmangle short dir limit (issue3958)
- update: remove .hg/graftstate on clean (issue3970)
9. Mercurial 2.6.2 (2013-06-01)
This is a regularly-scheduled bugfix release.
- amend: complain more comprehensibly about subrepos
- blackbox: fix blackbox causing exceptions in tests
- blackbox: fix recording exit codes (issue3938)
- dirstate: don't overnormalize for ui.slash
- graft: refuse to commit an interrupted graft (issue3667)
- help: fix role/option confusion in RST
- help: stop documentation markup appearing in generated help
10. Mercurial 2.6.1 (2013-05-14)
This is an unscheduled bugfix release to address some minor regressions in the 2.6 release.
- convert: fix bug of wrong CVS path parsing without port number (issue3678)
- help/config: note 64-bit Windows registry key used with 32-bit Python
- hfs+: rewrite percent-escaper (issue3918)
- hgignore: fix regression with hgignore directory matches (issue3921)
- highlight: fix page layout with empty first and last lines
- largefiles: check existence of the file with case awareness of the filesystem
- largefiles: check unknown files with case awareness of the filesystem
- pathencode: grow buffers to increase safety margin
- revert: ensure that copies and renames are honored (issue3920)
- subrepo: open files in 'rb' mode to read exact data in (issue3926)
- windows: check target type before actual unlinking to follow POSIX semantics
11. Mercurial 2.6 (2013-05-01)
This release has known issues with some ignore rules (issue3921) and subrepos on Windows (issue3926).
This is a regularly scheduled feature release.
11.1. Core features
- amend: support amending merge changesets (issue3778)
- archive: raise error.Abort if the file pattern matches no files
- bash_completion: allow remove to complete normal files
- bookmarks: allow (re-)activating a bookmark on the current changeset
- bookmarks: don't allow integers as bookmark/branch/tag names
- bookmarks: moving the active bookmark deactivates it
- bookmarks: resolve divergent bookmarks when moving active bookmark forward
- bookmarks: resolve divergent bookmark when moving across a branch
- commit: show active bookmark in commit editor helper text
- config: discard "%unset" values defined in the other files read in previously
- dates: support 'today' and 'yesterday' in parsedate (issue3764)
- date: understand "now" as a shortcut for the current time
- dispatch: print 'abort:' when a pre-command hook fails (BC)
- dispatch: return status is 1 and a nice error message is printed when a user intervention is required (BC)
- export: clobber files with -o (BC) (issue3652)
- export: export working directory parent by default
- export: show 'Date' header in a format that also is readable for humans
- filesets: add eol predicate
- hgweb: generate documentation as HTML (previously as text)
- hgweb: teach archive how to download a specific directory or file
- merge: apply non-interactive working dir updates in parallel
- mergetools: avoid losing the merged version with meld
- mergetools: vimdiff issues a warning explaining how to abort
- sslutil: abort if peer certificate is not verified for secure use
- summary: make "incoming" information sensitive to branch in URL (issue3830)
- summary: make "outgoing" information sensitive to branch in URL (issue3829)
- summary: show active bookmark even if not at current changeset
- templatekw: add default styles for hybrid types (issue3887)
- templater: add get() function to access dict element (e.g. extra)
- addremove: improve performance
- ancestor: a new algorithm that is faster for nodes near tip
- dirstate: performance improvements
- grep: use re2 if possible
- parsers: a C implementation of the new ancestors algorithm
- scmutil: rewrite dirs in C, use if available
- tags: update tag type only if tag node is updated (issue3911)
11.2. Extension features
- blackbox: new extension
- hgk: add support for phases
- hgk: don't use fixed format for dates
- hgk: update background colour when Ttk is available
- histedit: allow "-" as a command file
- histedit: handle multiple spaces between action and hash (issue3893)
- histedit: make "hg histedit" sensitive to branch in URL
- histedit: properly handle --continue on empty fold
- histedit: support editing of the first commit (issue3767)
- interhg: feature integrated in core. Extension removed.
- largefiles: don't cache largefiles for pulled heads by default
- largefiles: improve reuse of HTTP connections
- largefiles: introduce lfpull command for pulling missing largefiles
- largefiles: introduce pulled() revset expression for use in --lfrev
- largefiles: introduce pull --lfrev option
- largefiles: quiet (and document) undefined name errors (issue3886)
- largefiles: stat all largefiles in one batch before downloading
- largefiles: use repo.wwrite for writing standins (issue3909)
- mq: comply with filtering when injecting fake tags (issue3812)
- mq: do not inherit settings from base repo in mqrepo (issue2358)
- rebase: check no-op before checking phase (issue3891)
- rebase: fix --collapse when a file was added then removed
- smtp: use 465 as default port for SMTPS
- smtp: verify the certificate of the SMTP server for STARTTLS/SMTPS
- subrepo: clone of git sub-repository creates incorrect git branch (issue3870)
- subrepo: do not push mercurial subrepos whose store is clean
- subrepo: fix exception on revert when "all" option is omitted
11.3. Fixes
- annotate: increase refcount of each revision correctly (issue3841)
- applyupdates: assign variable before we try to use it (issue3855)
- bookmarks: allow moving a bookmark forward to a descendant
- bookmarks: fix bug that activated a bookmark even with -r passed
- case collision: avoid unexpected case folding issue during merge that should succeed (issue3452)
- commit: allow closing "non-head" changesets
- convert/git: catch errors from modern git-ls-remote (issue3428)
- destroyed: invalidate phraserevs cache in all cases (issue3858)
- diff: fix binary file removals in git mode
- http: avoid large text dumps when remote url is not a repo
- import: don't rollback an unrelated transaction on failed import --exact (issue3616)
- log: fix behavior with empty repositories (issue3497)
- outgoing: fix possible filtering crash in outgoing (issue3814)
- pager: catch ctrl-c on exit (issue3834)
- record: abort on malformed patches instead of crashing
- revset: change ancestor to accept 0 or more arguments (issue3750)
- revset: don't abort when regex to tag() matches nothing (issue3850)
- scheme: don't crash on invalid URLs
- setup: make error message for missing Python headers more helpful
- sshpeer: store subprocess so it cleans up correctly
- win32: use explicit path to "python.exe" only if it exists
12. Mercurial 2.5.4 (2013-04-04)
This fixes an urgent regression in merging with subrepos introduced in 2.5.
- applyupdates: assign variable before we try to use it (issue3855)
- setup.py: properly discard trust warning
13. Mercurial 2.5.3 (2013-04-01)
- hgweb: show correct error message for i18n environment
- localrepo: always write the filtered phasecache when nodes are destroyed (issue3827)
- rebase: restore active bookmark after rebase --continue
- setup.py: add metadata to register package to PyPI
- setup.py: ignore warnings from obsolete
- zsh_completion: fix trailing carriage return spoiling tag completion
14. Mercurial 2.5.2 (2013-03-01)
- bundle: treat branches newly created on the local side correctly (issue3828)
- largefiles: avoid rechecking hashes when avoidable
- largefiles: don't let update leave wrong largefiles in wd if fetch fails
- largefiles: fix off-by-one error on pull --all-largefiles
- largefiles: fix download of largefiles from an empty list of changesets
- largefiles: missing largefiles should not be committed as removed
- mergetools: vimdiff issues a warning explaining how to abort
- outgoing: fix possible filtering crash in outgoing (issue3814)
- rebase: fix potential infinite loop in complex rename situation (issue3843)
15. Mercurial 2.5.1 (2013-02-08)
This is a non-scheduled bugfix release.
- hgk: support the old way of getting the current Ttk theme (issue3808)
- hgweb.cgi: fix internal WSGI emulation (issue3804)
- hgweb: make 'summary' work with hidden changesets (issue3810)
- incoming: fix incoming when a local head is remotely filtered (issue3805)
- largefiles: don't crash when trying to find default dest for url without path
- rebase: derive node from target rev (issue3802)
16. Mercurial 2.5 (2013-02-01)
This is a regularly-scheduled feature release.
16.1. Core features
- branchmap: improved performance
- bundle: add revset expression to show bundle contents (issue3487)
- dirstate: implement unix statfiles in C
- hgweb: add (Atom) subscribe links to the repository index
- hgweb: add "URL breadcrumbs"
- hgweb: add branches RSS and Atom feeds
- hgweb: secret changesets are excluded from html view (issue3614)
- serve: use chunked encoding in hgweb responses
- pathencode: implement both basic and hashed encoding in C
- subrepo: append subrepo path to subrepo error messages
- validate: check for spurious incoming filelog entries
- hgweb: allow hgweb's archive to recurse into subrepos
16.2. Changeset Evolution
Major progress was made toward ChangesetEvolution:
- hidden changesets are now properly ignored by all commands
- a global --hidden flag is added to give access to hidden changesets
- rewriting a changeset but not its descendants is now allowed; this leaves unstable changesets behind
- we now detect *divergent* changesets, the third and last kind of obsolescence-related trouble; a divergent() revset is added
- a troubled() revset has been added
- branchmaps for *visible* and *served* changesets are now cached on disk, a major performance improvement
- performance improvements in most evolution-related algorithms
16.3. Extension features
- color: add template label function
- convert: add config option to use the local time zone
- convert: add support for converting git submodule (issue3528)
- hgk: use Ttk instead of plain Tk
- inotify: don't fall over just because of a dangling symlink
- largefiles: fix revert removing a largefile from a merge
- largefiles: fix update from a merge with removed files
- largefiles: make log match largefiles in the non-standin location too
- largefiles: make update with backup files in .hglf slightly less broken
- largefiles: rename 'admin' to more descriptive 'lfstoredir'
- rebase: performance improvements
- rebase: rebase sets with multiple roots are now handled by the --rev option
- record: use patch.diffopts to account for user diffopts
- share: always set default path to work with subrepos (issue3518)
- zsh_completion: add completion of branch names
16.4. Fixes
- commands: 'hg bookmark NAME' should work even with ui.strict=True
- copies: do not track backward copies, only renames (issue3739)
- destroyed: keep the filecache in sync with __dict__ (issue3335, issue3693, issue3743)
- grep: don't search past the end of the searched string
- hgweb: properly return 404 for unknown revision (instead of 500)
- histedit: proper phase conservation (issue3724)
- histedit: prevent obsolescence cycles (issue3681)
- hook: disable demandimport before importing hooks
- mq: don't fail when removing a patch without patch file from series file
- mq: fix qpop of working directory parent patch when not at qtip
- zeroconf: use port from server instead of picking port from config (issue3746)
- update: update to current bookmark if it moved out from under us (issue3682)
- bookmarks: show active bookmark even if not at working dir
- largefiles: let wirestore._stat return stats as expected by remotestore verify
- largefiles: adapt verify to batched remote statlfile (issue3780)
- largefiles: don't allow corruption to propagate after detection
- largefiles: don't verify largefile hashes on servers when processing statlfile
- largefiles: allow use of urls with #revision
- largefiles: fix commit when using relative paths from subdirectory
- largefiles: fix cat when using relative paths from subdirectory
- histedit: prevent parent guessed via --outgoing from being a revset (issue3770)
- rebase: delete divergent bookmarks on destination (issue3685)
- hgwebdir: use web.prefix when creating url breadcrumbs (issue3790)
- subrepo: allow skipping courtesy phase sync (issue3781)
- merge: .hgsubstate is special as merge destination, not as merge source
- merge: improved handling of symlinks
17. Mercurial 2.4.2 (2013-01-01)
This is a regularly-scheduled bugfix release.
- amend: invalidate dirstate in case of failure (issue3670)
- amend: prevent loss of bookmark on failed amend
- bookmarks: fix head selection for merge with two bookmarked heads
- bundlerepo: don't return the peer without bundlerepo from getremotechanges
- dirstate: don't rename branch file if writing it failed
- dirstate: remove obsolete comment from setbranch
- hgweb: avoid generator exhaustion with branches
- hgweb: fix iterator reuse in atom feed generation
- hgwebdir: honor web.templates and web.static for static files (issue3734)
- largefiles revert: update lfdirstate with result from first cleanliness check
- largefiles status: update lfdirstate with result from cleanliness check
- largefiles: commit directories that only contain largefiles (issue3548)
- largefiles: don't walk through all ignored files
- paper: sanity-check page feed links
- scmutil: don't try to match modes on filesystems without modes (issue3740)
- zeroconf: use port from server instead of picking port from config (issue3746)
18. Mercurial 2.4.1 (2012-12-03)
This is a regularly-scheduled bugfix release.
- amend: force editor only if old message is reused (issue3698)
- grep: don't search past the end of the searched string
- hooks: be even more forgiving of non-fd descriptors (issue3711)
- hooks: delay I/O redirection until we actually run a hook (issue3711)
- phases: fix missing "error" module import (issue3707)
- rebase: fix pull --rev options clashing with --rebase (issue3619)
- subrepo: add argument to "diff()" to pass "ui" of caller side (issue3712) (API)
- update: allow update to existing branches with invalid names (issue3710)
- util: make chunkbuffer non-quadratic on Windows
19. Mercurial 2.4 (2012-11-01)
This is a regularly-scheduled feature release.
19.1. Core features
- amend: support for ChangesetEvolution if enabled
- bookmarks: deactivate current bookmark if no name is given
- bookmarks: teach the -r option to use revsets
- bookmarks: disallow bookmarks named 'tip', '.', or 'null'
- clone: substantial speedup to clone on repos with lots of heads (issue3378)
- clone: activate bookmark specified with --updaterev
- clone: update to @ bookmark if it exists
- log: substantial speedup for untracked files (issue1340)
- revsets: add branchpoint() function
- resolve: commit the changes after each item is resolved (issue3638)
- subrepo, hghave: use "svn --version --quiet" to determine version number
- subrepo: setting LC_MESSAGES only works if LC_ALL is empty or unset
- templatefilters: add parameterized date method
- templatefilters: add parameterized fill function
- templatefilters: avoid traceback caused by bogus date input (issue3344)
- templatekw: add p1rev, p1node, p2rev, p2node keywords
- templatekw: add parent1, parent1node, parent2, parent2node keywords
- templater: abort when a template filter raises an exception (issue2987)
- templater: add if/ifeq conditionals (see the template example after this list)
- templater: add sub() function
- templating: make new-style templating features work with command line lists
- bookmarks: take ChangesetEvolution into account when updating (issue3561)
- speed up various operations related to ChangesetEvolution
- add detection of changesets bumped by ChangesetEvolution
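To illustrate the new template conditionals and keywords above, a sketch (the revision and strings are placeholders, not taken from the release notes):
hg log -r . --template "{ifeq(branch, 'default', 'mainline', branch)}\n"
hg log -r . --template "first parent: {p1node|short}\n"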
19.2. Extension features
- color: add additional changeset.phase label to log.changeset and log.parent
- color: enabled color support for export command (issue1507)
- color: support for all grep fields
- contrib: add a commit synthesizer for reproducing scaling problems
- histedit: refuse to edit public changesets
- histedit: replaces patching logic by merges
- histedit: support for ChangesetEvolution if enabled
- largefiles: always create the cache and standin directories when cloning
- largefiles: distinguish "no remote repo" from "no files to upload" (issue3651)
- largefiles: fix a traceback in lfconvert if a largefile is missing (issue3519)
- mq: improve qqueue message with patches applied (issue3036)
- mq: update bookmarks during qrefresh
- notify: support revset selection for subscriptions
- rebase: support for ChangesetEvolution if enabled
- record: check for valid username before starting recording process (issue3456)
- record: fix display of non-ASCII names in chunk selection
19.3. Fixes
- amend: fix incompatibility between logfile and message option (issue3675)
- amend: wrap all commit operations in a single transaction
- bookmarks: abort when incompatible options are used (issue3663)
- bookmarks: avoid redundant creation/assignment of "validdests" in "validdest()"
- bookmarks: check bookmark format during rename (issue3662)
- bookmarks: when @ bookmark diverges, don't double the @ sign (BC)
- bookmark: prevent crashing when a successor is unknown locally (issue3680)
- clone: activate @ bookmark if updating to it
- clone: don't %-escape the default destination (issue3145)
- clone: make sure to use "@" as bookmark and "default" as branch (issue3677) (BC)
- clone: print bookmark name when clone activates a bookmark
- commands: don't infer repo for commands like update (issue2748)
- convert: normalize paths in filemaps (issue3612)
- dirstate: handle large dates and times with masking (issue2608)
- dirstate: handle dangling junctions on Windows (issue2579)
- filemerge: use util.shellquote when calling merge (issue3581)
- hgweb: make the escape filter remove null characters (issue2567)
- http2: make it possible to connect w/o ssl on port 443
- icasefs: make case-folding collision detection deletion aware (issue3648)
- largefiles: don't copy largefiles from working dir to the store while converting
- largefiles: respect the rev when reading standins in copytostore() (issue3630)
- largefiles: use 'default' instead of 'default-push' when pulling (issue3584)
- mq: fix qrefresh case sensitivity (issue3271)
- patchbomb: respect --in-reply-to for all mails if no intro message is sent
- remove: don't return error on directories with tracked files
- revset: accept @ in unquoted symbols (issue3686)
- scmutil: add mustaudit delegation to filtervfs (issue3673)
- subrepo: only do clean update when overwrite is set (issue3276)
- subrepo: subrepo isolation, pass baseui when cloning a new subrepo (issue2904)
- update: check for missing files with --check (issue3595) (BC)
- url: use open and not url.open for local files (issue3624)
- verify: fix all doubled-slash sites (issue3665)
- wireproto: fix pushkey hook failure and output on remote http repo
20. Mercurial 2.3.2 (2012-10-01)
- amend: preserve phase of amended revision (issue3602)
- archival: add "extended-timestamp" extra block for zip archives (issue3600)
- hgweb: avoid bad $$ processing in graph (issue3601)
- hgweb: fix incorrect graph padding calculation (issue3626)
- largefiles: fix return codes for multiple commands
- largefiles: don't convert dest=None to dest=hg.defaultdest() in clone command
- largefiles: download missing subrepo revs when archiving
- largefiles: enable islfilesrepo() prior to a commit (issue3541)
- largefiles: handle commit -A properly, after a --large commit (issue3542)
- largefiles: preserve exit code from outgoing command (issue3611)
- largefiles: restore caching of largefiles with 'clone -U --all-largefiles'
- largefiles: restore normal 'clone -u' and 'clone -U' functionality
- lock: fixed race condition in trylock/testlock (issue3506)
- mergetools.hgrc: set vimdiff to check=changed
- strip: fix revset usage (issue3604)
- subrepo: encode unicode path names (issue3610)
21. Mercurial 2.3.1 (2012-09-01)
- clone: don't fail with --update for non-local clones (issue3578)
- commit: normalize filenames when checking explicit files (issue3576)
- fileset: actually implement 'minusset'
- fileset: do not traceback on invalid grep pattern
- fileset: exclude deleted files from matchctx.existing()
- fileset: fix generator vs list bug in fast path
- fileset: matchctx.existing() must consider ignored files
- fileset: matchctx.existing() must consider unknown files
- largefiles: adjust localstore to handle batch statlfile requests (issue3583)
- merge: handle case when heads are all bookmarks
- obsolete: import modules within mercurial/ without "from mercurial"
- revlog: don't try to partialmatch strings with length > 40
- rollback: write dirstate branch with correct encoding
- store: only one kind of OSError means "nonexistent entry"
- store: sort the results of fncachestore.datafiles()
- strip: fix revset usage (issue3604)
- templater: handle a missing value correctly
- verify: do not choke on valid changelog without manifest
- wix: bump MSI based installers to use Python 2.7
22. Mercurial 2.3 (2012-08-01)
This is a regularly-scheduled feature release with numerous improvements and bugfixes.
22.1. Core features
- help: add --keyword (-k) for searching help
- hgweb: side-by-side comparison functionality
- log: support --graph without graphlog extension
- push: accept revset argument for --rev
- merge: bookmarks will no longer automatically merge with unnamed heads or other bookmarks. Instead it picks heads with diverging bookmarks.
- introduce ChangesetsObsolescence concept (experimental)
- bookmarks: allow existing remote bookmarks to become heads when pushing
- bookmarks: pull new bookmarks from remote by default (backward incompatible change)
- bookmarks: delete divergent bookmarks on merge
- bisect: set HG_NODE when running a command
- graft: allow -r to specify revisions
- graft: implement --log (issue3438)
- graft: remark on empty graft
- hooks: print out more information when loading a python hook fails
- identity: show trailing '+' for dirty subrepos (issue2839)
- incoming/outgoing: handle --graph in core
- merge: warn about file deleted in one branch and renamed in other (issue3074)
- Mercurial can now identify third-party extensions as sources of tracebacks
- outgoing: accept revset argument for --rev
- performance improvement on branchy repos: incrementally update branchcache
- performance improvement on huge file trees: add a C function to pack the dirstate
- performance improvement for huge .hgignore: process regex with re2 bindings if available
- revset: add "diff" field to "matching" predicate
- revset: add "converted" predicate to find converted changesets
- revset: add "origin" and "destination" predicates, to get graft, transplant or rebase origins or destinations
- revset: add "extra" predicate to match changesets' extra fields (issue2767)
- revset: add pattern matching to "bookmarks/branch/extra/tag/user" predicates (see the revset example after this list)
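A sketch of the new revset predicates above (the branch name and extra field are placeholders, not taken from the release notes):
hg log -r 'branch("re:^stable")'         # "re:" prefix enables regex matching
hg log -r 'extra("branch", "default")'   # match on a changeset's extra fields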
22.2. Extension features
- acl: use of "!" prefix in user or group names
- children: mark extension as deprecated
convert/svn: handle non-local svn destination paths (issue3142)
convert: accept Subversion 'file:///c%3A/svnrepo' syntax on Windows
- fetch: mark extension as deprecated
- graphlog: feature is now into core
- histedit: new extension for interactive history editing
- hg-ssh: add read-only flag
largefiles: add --all-largefiles flag to pull and clone (issue3188)
largefiles: improve performance by batching statlfile requests when pushing a largefiles repo (issue3386)
- largefiles: no longer attempt to clone all largefiles to non-local destinations
largefiles: optimize performance when updating (issue3440)
- largefiles: support revsets for cat, outgoing --large and revert
- mq: introduce qpush/qpop/qgoto --keep-changes
- strip: introduce -B option to remove a bookmark
rebase: allow collapsing branches in place (issue3111)
- rebase: make --dest understand revsets
rebase: drop the infamous --detach option: rebase now behave with --source and --rev as expectable. It may no longer add second parent to rebased changeset (backward incompatible change)
transplant: handle non-empty patches doing nothing (issue2806)
- transplant: manually transplant pullable changesets with --log
22.3. Fixes
- bisect: fix O(n**2) behaviour (issue3382)
- bookmarks: fix push of moved bookmark when creating new branch heads
- case insensitive file systems can no longer be confused by -R (issue2167)
- copies: one fix related to directory rename detection (issue3511)
- convert: check for failed svn import in debugsvnlog and abort cleanly
- convert: ignore svn:executable for subversion targets without exec bit support
- convert: keep branch switching merges with ancestors (issue3340)
- convert: make filemap renames consistently override revision renames
- debugrevlog: fix a bug with empty repository (issue3537)
- graphlog: don't truncate template value at last \n
- httprepo: ensure Content-Type header exists when pushing data
- largefiles: fix a traceback when addremove follows a remove (issue3507)
- largefiles: fix a traceback when archiving a subrepo in a subrepo
- largefiles: fix addremove when largefile is missing (issue3227)
- largefiles: fix addremove with -R option
- largefiles: fix exception hack for i18n (issue3197)
- largefiles: fix path handling for cp/mv (issue3516)
- largefiles: archive -S now stores largefiles instead of standins
- largefiles: fix hg addremove when already removed largefile exists (issue3364)
- merge: do not warn about copy and rename in the same transaction (issue2113)
- mq: add ".hgsubstate" to patch target list only if it is not listed up yet
- mq: create patch file after commit to import diff of ".hgsubstate" at qrefresh
- pager: work around bug in python 2.4's subprocess module (issue3533)
- revlog: zlib.error are no longer sent to the user (issue3424)
- tag: don't allow tagging the null revision (issue1915)
23. Mercurial 2.2.3 (2012-07-01)
This is a regularly-scheduled bugfix release.
- amend: disable hooks when creating intermediate commit (issue3501)
- archive: make progress only show files that are actually archived
- bookmarks: correctly update current bookmarks on rebase (issue2277)
- bugzilla: stop bugs always being marked as fixed in xmlrpc (issue3484)
- graft: don't drop the second parent on unsuccessful merge (issue3498)
- hgweb: fix linebreak location in gitweb filediff.tmpl view
- rebase: improve error message on improper phases
- record: fix display of non-ASCII names
- statichttprepo: don't send Range header when requesting entire file
- strip: update help to state that you can strip public changesets
- subrepo/svn: make rev number retrieval compatible with svn 1.5 (issue2968)
- subrepo: support Git being named "git.cmd" on Windows (issue3173)
- subrepo: warn user if Git is not version 1.6.0 or higher
- update: fix help regarding update to ancestor
24. Mercurial 2.2.2 (2012-06-01)
This is a regularly-scheduled bugfix release.
- addremove: document default similarity behavior (issue3429)
- alias: inherit command optionalrepo flag (issue3298)
- amend: preserve extra dict (issue3430)
- bisect: save current state before running a command
- bugzilla: fix transport initialization on python 2.4
- build: fix hgrc manpage building with docutils 0.9
- bundle: make bundles more portable (issue3441)
- changelog: ensure that nodecache is valid (issue3428)
- hg-ssh: exit with 255 instead of -1 on error
- hgweb: fix filediff base calculation
- largefiles: fix "hg status dir" missing regular files (issue3421)
- largefiles: fix deletion of multiple missing largefiles (issue3329)
- largefiles: follow normal codepath for addremove if non-largefiles repo (issue3249)
- largefiles: in putlfile, ensure tempfile's directory exists prior to creation
- largefiles: use wlock for lfconvert (issue3444)
- localrepo: clear _filecache earlier to really force reloading (issue3462)
- match: make 'match.files()' return list object always
- mq: add --no-backup for qpush/qpop/qgoto
- mq: backup local changes in qpop --force (issue3433)
- mq: backup local changes in qpush --force
- mq: qimport needs wlock for --push - do that after releasing lock
- osutil: handle deletion race with readdir/stat (issue3463)
- pager: check if signal.SIGPIPE exists
- pager: preserve Hg's exit code (and fix Windows support) (issue3225)
- pager: remove quiet flag
- paper, monoblue: link correctly to lines in annotate view
- parsers: fix refcount bug on corrupt index
- patch: fix segfault against unified diffs whose start line is zero
- patch: keep patching after missing copy source (issue3480)
- posix: work around lack of TIOCGWINSZ on Irix (issue3449)
- revpair: handle odd ranges (issue3474)
- revset: explicitly tag alias arguments for expansion
- revset: fix infinite alias expansion detection
- revset: fix traceback for bogus revisions in id(rev)
- revset: make matching() preserve input revision order
- scmutil: seen.union should be seen.update (issue3476)
- subrepo: do not traceback on .hgsubstate parsing errors
- subrepo: ignore blank lines in .hgsubstate (issue3424)
- tag: run commit hook when lock is released (issue3344)
- templater: handle SyntaxError when parsing ui.logtemplate
- util: fix bad variable use in bytecount introduced by f0f7f3fab315
- win32: fix encoding handling for registry strings (issue3467)
25. Mercurial 2.2.1 (2012-05-03)
This is an unscheduled bugfix release to fix a significant memory leak in hgweb.
- bookmarks: catch the proper exception for missing revisions
- help: add reference to template help (issue3413)
- help: added description for the web.collapse setting
- largefiles: fix commit of both largefiles and non-largefiles (issue3354)
- parsers: fix refcount leak, simplify init of index (issue3417)
26. Mercurial 2.2 (2012-05-01)
This is a regularly-scheduled feature release. The most notable feature is a new safe '--amend' option for commit using our new phases infrastructure. There are also a number of significant performance improvements for large repositories and improvements for case-folding filesystems. See UpgradeNotes for minor compatibility notes.
26.1. Core features
- commit: add --amend option
- fileset: add "subrepo" fileset symbol
- graft: add --dry-run support (issue3362)
- hgweb: add support for branch width and color settings
- hgweb: add block numbers to diff regions and related links
- hgweb: support multi-level repository indexes by enabling descend and collapse
- merge: improve performance with lots of unknown files
- parsers: incrementally parse the revlog index in C
- plan9: add support for plan9
- push/pull: improve performance for partial transfers
- push: decompress in larger chunks for better performance on the server
- clone: add server config option to prefer uncompressed clone
- revert: add support for reverting subrepos
- revset: add "matching" keyword
- store: speed up read and write of large fncache files
- ui: optionally quiesce ssl verification warnings on python 2.5
26.2. Extension features
- bugzilla: add xmlrpcemail submission for Bugzilla 3.6 email interface
- bugzilla: allow change comment to mark bugs fixed
- bugzilla: extract optional hours from commit message and update bug time
- bugzilla: modify access interface to include new bug states
- graphlog: add all log options to glog command
- patchbomb: add --body flag to send patches as inline message body text
- record: allow splitting of hunks by manually editing patches
- transplant: permit merge changesets via --parent
26.3. Fixes
- alias: fix shell alias documentation (issue3374)
- archive: make it work with svn subrepos (issue3308)
- branchmap: server should not advertise secret changesets in branchmap (issue3303)
- clone: always close source repository (issue2491)
- commit: abort on merge with missing files (BC)
- config: discard UTF-8 BOM if found
- convert/bzr: convert all branches (issue3229) (BC)
- convert/bzr: expect unicode metadata, encode in UTF-8 (issue3232)
- convert/bzr: handle empty bzr repositories (issue3233)
- convert/bzr: ignore nested repos when listing branches (issue3254)
- convert/svn: do not try converting empty head revisions (issue3347)
- convert/svn: make svn sink work with svn 1.7
- convert: support non-annotated tags in git backend
- dirstate: preserve path components case on renames (issue3402)
- export: catch exporting empty revsets (issue3353)
- icasefs: make case-folding collision detection rename aware (issue3370)
- inotify: catch SignalInterrupt during shutdown (issue3351)
- journal: use tryread helper to backup files (issue3375)
- largefiles: fix cat for largefiles (issue3352)
- largefiles: fix status -S reporting of subrepos (issue3231)
- largefiles: hide .hglf/ prefix for largefiles in hgweb
- largefiles: notice dirty large files in a subrepo
- largefiles: only update changed largefiles when transplanting
- largefiles: optimize update speed by only updating changed largefiles
- localrepo: add setparents() to adjust dirstate copies (issue3407)
- mdiff: fix diff header generation for files with spaces (issue3357)
- merge: check for untracked files more precisely (issue3400)
- merge: fix unknown file merge detection for case-folding systems
- patch: be more tolerant with "Parent" header (issue3356)
- patch: be more tolerant with EOLs in binary diffs (issue2870)
- patch: fix patch hunk/metadata synchronization (issue3384)
- phase: when phase cannot be reduced, hint at --force and return 1 (BC)
- posix: disable cygwin's symlink emulation (BC)
- posix: ignore execution bit in cygwin (issue3301)
- pure/osutil: use Python's msvcrt module (issue3380)
- rebase: preserve mq series order, guarded patches (issue2849)
- rebase: skip resolved but emptied revisions
- revset: fix O(n**2) behaviour of bisect() (issue3381)
- revset: fix adds/modifies/removes and patterns (issue3403)
- revset: fix alias substitution recursion (issue3240)
- subrepo/svn: abort on commit with missing file (issue3029)
- subrepo/svn: fix checked out rev number retrieval (issue2968)
- subrepo: fix default implementation of forget() (issue3404)
- subrepo: rewrite handling of subrepo state at commit (issue2403)
- templates/filters: extracting the user portion of an email address (BC)
- transplant: do not rollback on patching error (issue3379)
- update: fix case-collision with a clean wd and no --clean
- update: make --check abort with dirty subrepos
- update: use normal update path with --check (issue2450)
- wireprotocol: use visibleheads as reference while unbundling (issue3303)
ds.dynamical systems - Perfectly centered break of a perfectly aligned pool ball rack - MathOverflow
URL:http://mathoverflow.net/a/156407
This question was cross-posted on Math Stack Exchange. Here is a copy of my answer for it there.
This is it. The perfectly centered billiards break. Behold.
Setup
This break was computed in Mathematica using a numerical differential equations model. Here are a few details of the model:
- All balls are assumed to be perfectly elastic and almost perfectly rigid.
- Each ball has a mass of 1 unit and a radius of 1 unit.
- The cue ball has an initial speed of 10 units/sec.
- The force between two balls is given by the formula $$ F \;=\; \begin{cases}0 & \text{if }d \geq 2, \\ 10^{11}(2-d)^{3/2} & \text{if }d<2,\end{cases} $$ where $d$ is the distance between the centers of the balls. Note that the balls overlap if and only if $d < 2$. The power of $3/2$ was suggested by Yoav Kallus in the comments, because it follows Hertz's theory of non-adhesive elastic contact.
The initial speed of the cue ball is immaterial -- slowing down the cue ball is the same as slowing down time. The force constant $10^{11}$ has no real effect as long as it's large enough, although it does change the speed at which the initial collision takes place.
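The answer links to the Mathematica code rather than reproducing it, but the model is simple enough to sketch. Below is a minimal Python/SciPy version under the same assumptions (unit masses and radii, cue speed 10, Hertzian force law); the stiffness constant is scaled down from $10^{11}$ so the integration finishes quickly, which, per the note above, should not materially change the outcome. Expect speeds close to, but not digit-for-digit identical with, the table further down.
import numpy as np
from scipy.integrate import solve_ivp

R, K, P = 1.0, 1e7, 1.5   # ball radius, force constant, force-law exponent

# Rack of 15 balls in a triangle with its apex at the origin; cue ball below it.
pos = [(0.0, -4.0)]                       # cue ball center
for row in range(5):                      # rows of 1, 2, ..., 5 balls
    pos += [(2.0 * j - row, row * np.sqrt(3.0)) for j in range(row + 1)]
pos = np.array(pos)
vel = np.zeros_like(pos)
vel[0] = (0.0, 10.0)                      # cue ball moves straight up

def rhs(t, y):
    p, v = y[:32].reshape(16, 2), y[32:].reshape(16, 2)
    a = np.zeros_like(p)
    for i in range(16):
        for j in range(i + 1, 16):
            d_vec = p[i] - p[j]
            d = np.hypot(*d_vec)
            if d < 2 * R:                 # overlap -> repulsive contact force
                f = K * (2 * R - d) ** P * d_vec / d
                a[i] += f                 # unit mass: force equals acceleration
                a[j] -= f
    return np.concatenate([v.ravel(), a.ravel()])

y0 = np.concatenate([pos.ravel(), vel.ravel()])
sol = solve_ivp(rhs, (0.0, 0.4), y0, rtol=1e-8, atol=1e-10, max_step=1e-3)
v_end = sol.y[32:, -1].reshape(16, 2)
speeds = np.hypot(v_end[:, 0], v_end[:, 1])
print(np.round(speeds, 2))                        # cue first, then rows front to back
print("sum of squared speeds:", speeds @ speeds)  # stays ~100 (energy conservation)
Changing P (with a suitably rescaled K) gives the linear, quadratic and "stiff" variants discussed further down.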
The Collision
For this model, the entire collision takes place in the first 0.2 milliseconds, and none of the balls overlap by more than 0.025% of their radius during the collision. (These figures are model dependent -- real billiard balls may collide faster or slower than this.)
The following animation shows the forces between the balls during the collision, with the force proportional to the area of each yellow circle. Note that the balls themselves hardly move at all during the collision, although they do accelerate quite a bit.
The Trajectories
The following picture shows the trajectories of the billiard balls after the collision.
After the collision, some of the balls are travelling considerably faster than others. The following table shows the magnitude and direction of the velocity of each ball, where $0^\circ$ indicates straight up.
$$ \begin{array}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \text{ball} & \text{cue} & 1 & 2,3 & 4,6 & 5 & 7,10 & 8,9 & 11,15 & 12,14 & 13 \\ \hline \text{angle} & 0^\circ & 0^\circ & 40.1^\circ & 43.9^\circ & 0^\circ & 82.1^\circ & 161.8^\circ & 150^\circ & 178.2^\circ & 180^\circ \\ \hline \text{speed} & 1.79 & 1.20 & 1.57 & 1.42 & 0.12 & 1.31 & 0.25 & 5.60 & 2.57 & 2.63 \\ \hline \end{array} $$
For comparison, remember that the initial speed of the cue ball was 10 units/sec. Thus, balls 11 and 15 (the back corner balls) shoot out at more than half the speed of the original cue ball, whereas ball 5 slowly rolls upwards at less than 2% of the speed of the original cue ball.
By the way, if you add up the sum of the squares of the speeds of the balls, you get 100, since kinetic energy is conserved.
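This is easy to check from the table above; a one-liner in Python, with each symmetric pair of balls counted twice:
speeds = [1.79, 1.20, 1.57, 1.57, 1.42, 1.42, 0.12, 1.31, 1.31,
          0.25, 0.25, 5.60, 5.60, 2.57, 2.57, 2.63]
print(sum(s * s for s in speeds))  # 100.02 -- i.e. 100 up to the table's rounding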
Linear and Quadratic Responses
The results of this model are dependent on the power of $3/2$ in the force law -- other force laws give other breaks. For example, we could try making the force a linear function of the overlap distance (in analogy with springs and Hooke's law), or we could try making the force proportional to the square of the overlap distance. The results are noticeably different.
Stiff Response
Glenn the Udderboat points out that "stiff" balls might be best approximated by a force response involving a higher power of the distance (although this isn't the usual definition of "stiffness"). Unfortunately, the calculation time in Mathematica becomes longer when the power is increased, presumably because it needs to use a smaller time step to be sufficiently accurate.
Here is a simulation involving a reasonably "stiff" force law $$ F \;=\; \begin{cases}0 & \text{if }d \geq 2, \\ 10^{54}(2-d)^{10} & \text{if }d<2.\end{cases} $$
As you can see, the result is very similar to my initial answer on Math Stack Exchange. This seems like good evidence that the behavior discussed in my initial answer is indeed the limiting behavior in the case where this notion of "stiffness" goes to infinity.
As you might expect, most of the energy in this case is transferred very quickly at the beginning of the collision. Almost all of the energy has moved to the back corner balls within the first 0.02 milliseconds. Here is an animation of the forces:
After that, the corner balls and the cue ball shoot out, and the remaining balls continue to collide gently for the next millisecond or so.
While the simplicity of this behavior is appealing, I would guess that "real" billiard balls do not have such a force response. Of the models listed here, the initial Hertz-based model is probably the most accurate. Qualitatively, it certainly seems the closest to an "actual" break.
Note: I have now posted the Mathematica code on my web page.
Chronicle of a death foretold - Opinion - Al Jazeera English
Comments:" Chronicle of a death foretold - Opinion - Al Jazeera English "
Two weeks ago, the Pentagon quietly released a statement that another Guantanamo detainee had died in custody, the ninth since the prison was opened in 2002. Adnan Farhan Abdul Latif, a 32-year-old man from Yemen who had spent eleven years incarcerated, was found dead in his cell on September 8.
The cause of his death has been recorded as unknown and may never truly be known, but Latif had long suffered from feelings of extreme depression during his time in jail, having made several suicide attempts in the previous years.
Latif had long complained of abuse by prison staff and of his deteriorating physical and mental condition during his imprisonment. Two years earlier, he had written that guards "entered my cell on a regular basis. They throw me and drag me on the floor... they strangle me and press hard behind my ears until I lose consciousness". In 2009 he slit his wrists in an attempt to end his life, writing about the incident later to his lawyer to say that his circumstances in Guantanamo "make death more desirable than living".
Latif was initially captured by Pakistani bounty hunters in the aftermath of the 9/11 attacks, when a mixture of confusion and desire for vengeance resulted in the effective labelling of any military-age Arab male found in Afghanistan and Pakistan as a potential terrorist.
He had been receiving medical care in Amman, Jordan for chronic injuries he had received from a car crash in Yemen that had fractured his skull and caused permanent damage to his hearing. Lured to Pakistan by the promise of cheap healthcare, once the war started he ended up caught in the dragnet of opportunistic bounty hunters who detained him, proclaimed him a terrorist and handed him over to the US military in neighbouring Afghanistan.
Later it would come out that such bounty hunters had been unscrupulous, detaining individuals and labelling them as terrorists baselessly in order to collect large cash incentives from the US military for their handover. No evidence was ever found connecting him to terrorism or violent militancy of any kind, and later medical examinations taken of him upon intake into military custody would corroborate his story regarding the nature of the head injuries he had come to Pakistan to treat. Indeed, when he was apprehended he was found not to be in possession of weapons or extremist literature of any kind - what he had with him were copies of his medical records.
Although in all his years in custody Latif was never charged with nor convicted of any crime related to terrorism or any other offence, his death is made even more tragic by the fact that he had been recommended for release from Guantanamo by the Department of Defence as early as 2004, and again in 2007, when it determined that he "is not known to have participated in any combatant/terrorist training". In 2009 a special task force commissioned by the Obama administration also ruled that Latif should be released, a decision which, under its internal mandates, could only be reached by the unanimous consensus of all US intelligence agencies. However, despite being cleared for release he remained in military custody, as a decision had been made not to repatriate any prisoners to Yemen due to ongoing political instability in the country, effectively leaving him and others like him in a state of indefinite detention.
Despite this, Latif fought a long legal battle through the civilian court system, taking his case all the way to the Supreme Court in order to prove his innocence and win his release. Finally, after years of legal challenges, in 2010 US District Judge Henry Kennedy ordered Latif's immediate release, calling the allegations against him "unconvincing"; in a 32-page order he ruled that the government had failed to provide evidence that Latif had been part of al-Qaeda or any other militant group, and directed it to "take all necessary and appropriate diplomatic steps to facilitate Latif's release forthwith".
Despite this, the Department of Justice successfully appealed the judge's decision, and in a 2-1 ruling Latif's release order was rescinded, effectively on the grounds that the allegations against him must be taken as accurate simply because the government claimed them to be so. The dissenting opinion lambasted the ruling as rigging "the game in the government's favour", with the ultimate result being to once more snatch away the prospect of freedom from Adnan Latif. Latif had placed his faith in the fairness and impartiality of the US legal system, and it failed him utterly, inventing new grounds to keep him incarcerated and, in the words of the dissenting judge, "moving the goalposts" in order to ensure that no matter what evidence existed of his innocence he would remain behind bars.
Throughout this time, almost a decade of his young life, Adnan Latif remained in Guantanamo Bay. He was interrogated hundreds of times and by his own account suffered frequent physical abuse and degradation at the hands of his captors. In a poem, he described the prison guards who were his warders as "artists of torture, pain, fatigue, insults and humiliation". He joined other prisoners in a hunger strike in protest of their continued imprisonment, and was forcibly tied to a special restraining chair and force-fed liquids through his nose twice a day for years. Despite describing the pain of the feeding as being like "having a dagger shoved down your throat", Latif remained on strike up to the time of his death. In the words of his lawyer David Remes, "This is a man who would not accept his situation... He would not accept his mistreatment. He would not go gently into that good night."
As the years dragged on and the prospect of his ever being released grew more remote, Latif's mental and physical condition continued to deteriorate markedly. During his incarceration, he wrote an abundance of letters and poetry which offer a window into the utter despair and hopelessness into which his life had descended; seemingly forever confined to a prison his writings described as "a piece of hell that kills everything, the spirit, the body and kicks away all the symptoms of health from them".
Locked for nearly 10 years in an island prison thousands of miles from his home, away from his loved ones and from everything one would find familiar and comforting in life, Latif sank deeper into depression and hopelessness as the futility of the legal efforts to win his freedom became clear. In one of his last letters to his lawyer he wrote: "Do whatever you wish to do, the issue [of my defence] is over", and included with it a message of farewell, addressed both to him personally and to the world at large: "With all my pains, I say goodbye to you and the cry of death should be enough for you. A world power failed to safeguard peace and human rights and from saving me. I will do whatever I am able to do to rid myself of the imposed death on me at any moment of this prison... the soul that insists to end it all and leave this life which is no longer anymore a life."
Adnan Farhan Abdul Latif died on September 8, 2012. He was described by those who knew him at Guantanamo as a slightly built man with a sensitive demeanour who was tormented by the circumstances of his life and the inescapable nightmare he found himself trapped in.
The booking photograph taken of him by military officials in prison shows a young man whose pain is not sublimated but clearly written on his face; a visceral expression of sadness and torment.
He died without ever having been charged with a crime, and while we may never know the exact circumstances of his death, whether he took his own life, whether he died as the result of physical abuse by his captors - as many other detainees are believed to have - or whether his body simply collapsed after years of stress, his attorney offered his own perspective: "He was so fragile, he was so tormented that it would not surprise me if he had committed suicide... However you look at it, it was Guantanamo that killed him."
His own words paint the picture of a man who had lost faith in a society which had treated him with unrelenting malice and cruelty: "I have seen death so many times... Everything is over, life is going to hell in my situation... America, what has happened to you?"
Murtaza Hussain is a Toronto-based writer and analyst focused on issues related to Middle Eastern politics.
Follow him on Twitter: @MazMHussain
The views expressed in this article are the author's own and do not necessarily reflect Al Jazeera's editorial policy.
Is the Super Bowl really a boost to host cities? | SportsonEarth.com : Neil deMause Article
Comments:" Is the Super Bowl really a boost to host cities? | SportsonEarth.com : Neil deMause Article "
URL:http://www.sportsonearth.com/article/66544296/
By Neil deMause
The approaching Super Bowl brings the annual battle between the NFL and economists over whether the game is a boon to host cities. Who's right?
On one side, you have the NFL. Last week, the league, as part of its non-stop hype-a-thon for the First Super Bowl Outdoors In Cold Weather Isn't Snow Just Romantic?, reported that the New York/New Jersey economy would see a $600 million boost as a result of Super Bowl spending. "Thanks to the Super Bowl, we're seeing more hotel rooms booked and restaurant tables reserved and even more excitement than usual for this time of year," U.S. Rep. Carolyn Maloney told reporters.
On the other, you have the nation's sports economists, who say the actual number is a fair bit lower. Like, maybe, zero. "There still remains no ex post evidence of an economic impact," says University of South Florida professor Philip Porter, almost audibly sighing over email since, as someone who's been studying this topic for more than a decade, he gets the same question every year at this time. "Super Bowl attendees simply don't buy much that the local economy sells."
So, either more than half a billion dollars, or bupkis. Definitely somewhere in there.
Given that this debate has been going on for eons, you'd think that we'd have reached some resolution by now. And it's an issue that matters tremendously: The reason the NFL puts out its numbers (other than self-aggrandizement and a desire to drive the latest concussion news off the back pages) is that cities spend big money to lure the big game -- not least by sinking hundreds of millions of dollars into the brand-new stadiums that the league says are a condition of hosting -- and the supposed economic payoff makes writing those nine-figure checks go down a bit easier. (In New Jersey's case, at least, the public got off relatively easy, as the Jets and Giants jointly coughed up the construction costs with the help of their not entirely happy season ticket holders, though they still got to cash in on a pile of tax and rent breaks.)
We can't directly evaluate the NFL's $600 million impact claim, because the league hasn't revealed how it came up with the number. According to Super Bowl Host Committee p.r. rep Alice McGillion (the former Yankees spokesperson who got to issue angry denials of things like the David Ortiz jersey burial story two days before Yankees officials themselves admitted it), the figures are from a 2010 study by the Super Bowl bid committee that no one's gotten around to releasing in the four years since.
We do have the NFL's reports from past years, though, which show similar numbers. Last year's Super Bowl in New Orleans, for example, was estimated to generate $480 million in local spending and $34.9 million in new local tax revenues, according to the NFL host committee's study. Read that study, and we find that to arrive at this figure, researchers simply surveyed Super Bowl attendees, asking them where they were from, whether they'd rented a hotel room, their total food expenses, and so on, then applied a multiplier to account for how fan spending got re-spent in the local economy. (This multiplier ends up basically doubling the final economic impact number; more on that in a moment.)
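As a toy illustration of that method (every number below is an invented round figure, not the committee's actual input), the headline figure is essentially surveyed direct spending scaled by the multiplier:

```python
# Toy version of the survey-and-multiplier method (invented round numbers).
visitors = 150_000                    # surveyed out-of-town attendees
avg_reported_spend = 1_600            # hotel + food + tickets per visitor, $
direct = visitors * avg_reported_spend       # $240M of "direct" spending

multiplier = 2.0                      # assumes each dollar is re-spent locally
headline = direct * multiplier               # the ~$480M-style headline number

print(f"direct ${direct/1e6:.0f}M -> headline ${headline/1e6:.0f}M")
```

The critiques that follow are, in effect, disputes about those inputs: who gets counted, how much of the spending actually stays in town, and how large the multiplier really is.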
You've probably noticed some potential problems here, beyond the dubious quality of survey results derived from asking drunken NFL fans how much they were spending on food. (Median answer: "WOOOOOO NINERS!") First off, a big chunk of spending is on things like Super Bowl tickets and Super Bowl beers and Super Bowl lawn gnomes -- but most of the money for such branded items goes right back out of town once the Super Bowl leaves (except for whatever small sum is paid to the vendors actually selling these items). As Holy Cross professor Victor Matheson, another economist who's studied Super Bowl impact, puts it, "Imagine an airplane landing at an airport and everyone gets out and gives each other a million bucks, then gets back on the plane. That's $200 million in economic activity, but it's not any benefit to the local economy."
Then there's the issue of displacement. When hordes of Super Bowl visitors descend on a city's hotel rooms, that fills up all the hotel rooms, which means -- wait for it -- no more hotel rooms for anyone else. So people who might have visited New Orleans otherwise are forced to steer clear. (The NFL study tried to account for this by subtracting out New Orleans' lost convention business, but as you may be aware, there are reasons to visit New Orleans in the winter other than a convention.) In fact, because Super Bowl rooms are often required to be rented by the week but many visitors only show up for the game weekend, some economists have suggested that all those incoming NFL fans only end up displacing people who would have spent more, on average, during their time in town.
So what do non-NFL studies find? Matheson says his research shows an average impact of between $30 million and $120 million in overall spending, which is more than Porter's nothing, but still a whole lot less than $600 million. And as for how much of that actually trickles down to the hosting city, the University of Maryland's Dennis Coates' study of the 2004 Super Bowl found that Houston received about $5 million in added sales tax revenues thanks to having the game in town, a number that would likely be somewhat higher today thanks to inflation. (It would be higher still if not for the fact that the NFL's tax-exempt status allows its employees to avoid paying any local sales taxes for their dinners or hotel stays during Super Bowl week; in New Orleans, this amounted to $800,000 in lost sales tax revenue.)
There are some indications, according to these economists, that a New York Super Bowl could actually work out better than the average. For starters, as Rep. Maloney noted, the freezing weather is actually a plus as far as the economy is concerned, because February isn't peak tourist season in New York, for obvious reasons. Holding the Super Bowl in a more typical locale like Florida may make for toastier fans, but it's a waste from an economic perspective, since you don't need to twist people's arms to go to Florida in the winter. (Plus, then you can't sell Super Bowl Santa hats.)
New York also has more hotel rooms than God, which makes tourist displacement less of a worry, even before accounting for all the Super Bowl visitors who will end up staying on docked cruise ships. "You could reasonably go to New York for something other than the Super Bowl in three weeks," says Matheson. "It would be absolutely insane to do that in Indianapolis."
None of these arguments are new, nor are they likely to change anytime soon. The essential problem with debates over economic impact studies is that they're way too easy to rig -- as a friend of mine who used to work in economic analysis recalls, clients would routinely ask him to come up with a methodology that would justify the desired result, and to dress it up in a clear plastic binder.
So it's not entirely worthless to host a Super Bowl -- or a World Series, or an NCAA championship game, or a World Cup, or any of the other things that sports boosters are forever prescribing as the cure-all for any city's economic woes. Given the findings of Matheson, Coates, Porter, and their colleagues, it's probably not unreasonable to guesstimate perhaps $100 million or so of new money changing hands within the New York area during the first weekend of February, and a few million of that trickling down to the public in the form of new taxes.
That sounds good -- until you realize that the state of New York alone will spend $5 million on advertising for Super Bowl-related events, leaving the only benefit as … advertising New York City as a place that's brutally cold in the winter? With economic strategies like these for NFL cities, the concussionpocalypse might turn out to be a blessing.
* * *
Neil deMause is a Brooklyn-based journalist who has covered sports economics for Slate, the Village Voice, Baseball Prospectus and a bunch of other places you wouldn't remember. He runs the stadium news website Field of Schemes, and co-authored the book of the same name.
Programma 101 - Wikipedia, the free encyclopedia
Comments:"Programma 101 - Wikipedia, the free encyclopedia"
URL:https://en.wikipedia.org/wiki/Programma_101
[Image: A Programma 101]
Type: Personal computer · Introduced: 1965 · Memory: 240 bytes · Successor: Programma P102
The Programma 101, also known as Perottina, was the first commercial "desktop computer".[1][2] It was produced by the Italian manufacturer Olivetti, based in Piedmont, and invented by the Italian engineer Pier Giorgio Perotto. Launched at the 1964 New York World's Fair, it entered volume production in 1965. A futuristic design for its time, the Programma 101 was priced at $3,200[3] (about $23,000 adjusted to 2011[4]). About 44,000 units were sold, primarily in the US.
It is usually classified as a printing programmable calculator or desktop calculator, partly because, three years later, the Hewlett-Packard 9100A, a model that took inspiration from the P101, was advertised by HP as a "portable calculator" in order to overcome customers' fears of computers[5] and to sell to corporations without going through the corporate computer department.[6]
Capabilities
The Programma 101 was able to calculate the four basic arithmetic functions (addition, subtraction, multiplication, and division), plus square root, absolute value, and fractional part, and also offered clear, transfer, exchange, and stop-for-input operations. There were 16 jump instructions and 16 conditional jump instructions; 32 label statements were available as destinations for the 32 jump instructions and/or the four start keys (V, W, Y, Z).[7]
Each full register held a 22-digit number with sign and decimal point.
Its memory consisted of 10 registers: three for operations (M, A, R); two for storage (B, C); three for storage and/or program (assignable as needed: D, E, F); and two for program only (p1, p2). Five of the registers (B, C, D, E, F) could be subdivided into half-registers, containing an 11-digit number with sign and decimal point. When used for programming, each full register stored 24 instructions.
It printed programs and results onto a roll of paper tape, similar to calculator or cash register paper.
Programming was similar to assembly language, but simpler, since there were fewer options. Instructions directed the exchange of data between memory registers and calculation registers, and the operations performed in the registers.
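As a toy summary (register names and capacities from this article; the code itself is purely illustrative), the memory map and the resulting maximum program size look like this:

```python
# Illustrative sketch of the P101 register file described above.
FULL_DIGITS, HALF_DIGITS = 22, 11          # digits per full / half register
INSTRUCTIONS_PER_FULL_REGISTER = 24

operation_regs = ["M", "A", "R"]           # used for calculations
storage_regs   = ["B", "C"]                # data only
flex_regs      = ["D", "E", "F"]           # data or program, assignable
program_regs   = ["p1", "p2"]              # program only
splittable     = set(storage_regs + flex_regs)   # two 11-digit halves each

# If D, E, and F are all assigned to program storage, the largest possible
# program is five full registers' worth of instructions:
max_program = (len(flex_regs) + len(program_regs)) * INSTRUCTIONS_PER_FULL_REGISTER
print(max_program)   # 120 instructions
```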
The stored programs could be recorded onto plastic cards approximately 10 cm × 20 cm that had a magnetic coating on one side and an area for writing on the other. Each card could be recorded on two stripes, enabling it to store two programs. All ten registers were stored on the card, allowing programs to use up to ten stored 11-digit constants.
The program to calculate logarithms occupied both stripes of one magnetic card.
Construction
All computation was handled by discrete devices (transistors and diodes mounted on phenolic resin circuit card assemblies), as there were no microprocessors, and integrated circuits were in their infancy. It used an acoustic delay-line memory with metal wires as a data storage device. Magnetostrictive transducers inside electromagnets were attached to either end of the wire. Data bits entering the magnets caused the transducer to contract or expand (based on the binary value) and to twist the end of the wire. The resulting torsional wave moved down the wire. A piezoelectric transducer converted the bits back into an electronic signal that was then amplified and sent back to the beginning, with a delay time of 2.2 milliseconds. Typically, many bits would be in transit through the delay, and the computer selected them by counting against a master clock to find the particular bit it required. Delay-line memory was far less expensive and far more reliable per bit than flip-flops made from vacuum tubes, and yet far faster than latching relays.
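To make the addressing scheme concrete, here is a toy software model (my own simplification, nothing like the actual electronics) of a recirculating delay line: bits advance one position per clock tick, the emerging bit is re-amplified and re-injected, and a particular bit is reached only by waiting for it to come around to the transducer.

```python
from collections import deque

class DelayLine:
    """Toy recirculating delay-line store: no random access; a bit is
    read or written only when it emerges at the transducer."""

    def __init__(self, n_bits):
        self.line = deque([0] * n_bits)  # bits in transit down the "wire"
        self.tick = 0                    # address currently at the transducer

    def step(self):
        # One clock tick: the emerging bit is re-amplified and re-injected.
        self.line.rotate(-1)
        self.tick = (self.tick + 1) % len(self.line)

    def _seek(self, addr):
        while self.tick != addr:         # count ticks against the master clock
            self.step()

    def read(self, addr):
        self._seek(addr)
        return self.line[0]

    def write(self, addr, bit):
        self._seek(addr)
        self.line[0] = bit

# 240 bytes of storage = 1920 bits circulating with a 2.2 ms round trip
mem = DelayLine(1920)
mem.write(42, 1)
assert mem.read(42) == 1
```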
Design and ergonomics
Olivetti was famous for its attention to both engineering and design, as its presence in the permanent collection of the Museum of Modern Art testifies, and the Programma 101 was another example of this attention. On the engineering side, the team worked hard to deliver a very simple product, something that anyone could use. To take care of the ergonomics and aesthetics of a product that hadn't existed before, Roberto Olivetti called in Mario Bellini, a young Italian architect:
I remember that one day I received a call from Roberto Olivetti: "I want to see you for a complex project I'm building". It involved the design not of a box containing mechanisms and stamped circuits, but of a personal object, something that had to live with a person sitting at a table or desk, and that had to start a relationship of comprehension, of interaction; something quite new, because before then computers were as big as a wardrobe. With a wardrobe we don't have any relationship: in fact, the most beautiful wardrobes disappear into the wall. But this wasn't a wardrobe or a box; this was a machine designed to be part of your personal entourage. —Mario Bellini, 2011, "Programma 101 — memory of the future", cit.

Interaction design and usability
One of the direct results of the Programma 101 team's focus on human-centered objectives was the invention of the programmable magnetic card, a revolutionary item for its time that allowed anyone to simply insert a card and execute any program in a few seconds.[8]
It was a very portable and effective solution: a small magnetic strip with a program recorded on it and space on the other side to write a description. The program was loaded just by inserting the card at the top, and when the card came out at the bottom it was aligned perfectly with the V, W, Y, Z keys, so that the author could write labels for those buttons on the card, making the user aware of their new functions.[9]
History
It was designed by Olivetti engineer Pier Giorgio Perotto in Ivrea. The styling, attributed to Marco Zanuso but in reality the work of Mario Bellini, was ergonomic and innovative for the time, and earned Bellini the Compasso d'Oro Industrial Design Award.
Developed between 1962 and 1964, it was saved from the sale of Olivetti's computer division to GE thanks to an employee who, one night, changed the product's internal categorization from "computer" to "calculator". This kept the small team inside Olivetti, though it created some awkward situations in the office, since the entire building except that one office was by then owned by GE.[10]
The Programma 101 was launched at the 1964 New York World's Fair, attracting major interest. 40,000 units were sold, 90% of them in the United States, where the sale price was $3,200[3] (increasing to about $3,500 in 1968).[7]
Hewlett-Packard was ordered to pay about $900,000 ($6.67 million in present-day terms[11]) in royalties to Olivetti after copying in the HP 9100 some of the solutions adopted in the Programma 101, such as the magnetic card and the architecture.[12][13]
About ten Programma 101s[14] were sold to NASA and used to plan the Apollo 11 landing on the Moon.
By Apollo 11 we had a desktop computer, sort of, kind of, called an Olivetti Programma 101. It was a kind of supercalculator. It was probably a foot and a half square, and about maybe eight inches tall. It would add, subtract, multiply, and divide, but it would remember a sequence of these things, and it would record that sequence on a magnetic card, a magnetic strip that was about a foot long and two inches wide. So you could write a sequence, a programming sequence, and load it in there, and then if you would — the Lunar Module high-gain antenna was not very smart, it didn't know where Earth was. [...] We would have to run four separate programs on this Programma 101 [...] —David W. Whittle, 2006[15]

The 101 is mentioned as part of the system used by the US Air Force to compute coordinates for ground-directed bombing of B-52 Stratofortress targets during the Vietnam War.[16]
The Art of Ass-Kicking – Start Smaller
Comments:"The Art of Ass-Kicking – Start Smaller"
URL:http://www.jasonshen.com/2014/start-smaller/
If a product is to succeed at all, it must first succeed on a smaller scale.
Small products do not always succeed, but they are easier and faster to build, test, and tweak than bigger products. This also applies to features. Perhaps John Gall put it best when he said:
A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system. —Gall's Law

Mighty Oaks from Little Acorns
We would not have…
- the globally distributed real-time content network that is Twitter 2014
- the “1000 songs in your pocket” that was the iPod
- the marketplace of 500k unique places to stay in 34k cities around the world that is Airbnb
if people didn’t first appreciate…
Only through the small product was the bigger, more powerful and complicated one able to emerge.
Your V1 Is Probably Too Big
Right after we had launched the barebones BurningManRides.com for Ridejoy in August 2011 to help 1600+ Burners share rides to the festival, we planned to go back and “build a simple V1 of the whole product”. That was going to include a payment system. It seemed obvious – how could you build a marketplace for ridesharing if you didn’t have a way to pay?
But when we talked to one of the YC partners, Emmett Shear (I think), we changed our minds.
We’d learn more from just launching a basic ridesharing marketplace, Shear argued, even if it had only messaging and no transactions, than we would from spending an extra month building a solid payment/transaction infrastructure.
He was right. After launching our V1 in October, it turned out our priorities shifted to growth and the search + profile experience. Payments didn’t come till months later.
Whatever product/feature/service you are trying to build, you can probably build a smaller version of it. Focus on just one feature of the product, or address one use case, or take on one customer type, or do a single pilot project. It will force you to make decisions and really prioritize.
The sooner you get something into the real world, the sooner you’ll get real data and experience on what actually matters.
Start smaller.
Photo Credit: Lotus Carroll via Compfight cc
How Music Hijacks Our Perception of Time - Issue 9: Time - Nautilus
Comments:"How Music Hijacks Our Perception of Time - Issue 9: Time - Nautilus"
URL:http://nautil.us/issue/9/time/how-music-hijacks-our-perception-of-time
One evening, some 40 years ago, I got lost in time. I was at a performance of Schubert’s String Quintet in C major. During the second movement I had the unnerving feeling that time was literally grinding to a halt. The sensation was powerful, visceral, overwhelming. It was a life-changing moment, or, as it felt at the time, a life-changing eon.
It has been my goal ever since to compose music that usurps the perceived flow of time and commandeers the sense of how time passes. Although I’ve learned to manipulate subjective time, I still stand in awe of Schubert’s unparalleled power. Nearly two centuries ago, the composer anticipated the neurological underpinnings of time perception that science has underscored in the past few decades.
The human brain, we have learned, adjusts and recalibrates temporal perception. Our ability to encode and decode sequential information, to integrate and segregate simultaneous signals, is fundamental to human survival. It allows us to find our place in, and navigate, our physical world. But music also demonstrates that time perception is inherently subjective—and an integral part of our lives. “For the time element in music is single,” wrote Thomas Mann in his novel The Magic Mountain. “Into a section of mortal time music pours itself, thereby inexpressibly enhancing and ennobling what it fills.”
We conceive of time as a continuum, but we perceive it in discretized units—or, rather, as discretized units. It has long been held that, just as objective time is dictated by clocks, subjective time (barring external influences) aligns to physiological metronomes. Music creates discrete temporal units but ones that do not typically align with the discrete temporal units in which we measure time. Rather, music embodies (or, rather, is embodied within) a separate, quasi-independent concept of time, able to distort or negate “clock-time.” This other time creates a parallel temporal world in which we are prone to lose ourselves, or at least to lose all semblance of objective time.
In recent years, numerous studies have shown how music hijacks our relationship with everyday time. For instance, more drinks are sold in bars when slow-tempo music is playing, which seems to make the bar a more enjoyable environment, one in which patrons want to linger—and order another round.1 Similarly, consumers spend 38 percent more time in the grocery store when the background music is slow.2 Familiarity is also a factor. Shoppers perceive longer shopping times when they are familiar with the background music in the store, but actually spend more time shopping when the music is novel.3 Novel music is perceived as more pleasurable, making the time seem to pass more quickly, and so shoppers stay in the stores longer than they realize.
Perhaps the clearest evidence of musical hijacking is this: In 2004, the Royal Automobile Club Foundation for Motoring deemed Wagner’s Ride of the Valkyrie the most dangerous music to listen to while driving. It is not so much the distraction, but the substitution of the frenzied tempo of the music that challenges drivers’ normal sense of speed—and the objective cue of the speedometer—and causes them to speed.
While music usurps our sensation of time, technology can play a role in altering music’s power to hijack our perception. The advent of audio recording not only changed the way music was disseminated, it changed time perception for generations. Thomas Edison’s cylinder recordings—heard below—held about four minutes of music.
This technological constraint set a standard that dictated the duration of popular music long after that constraint was surpassed.4 In fact, this average duration persists in popular music as the modus operandi today. This, in turn, influenced the way music of longer duration was heard. The implicit effect on the classical music industry was disastrous. To put it bluntly, the subjective perception of the length of time withered the collective attention span for listening to music.5
Neuroscience gives us insights into how music creates an alternate temporal universe. During periods of intense perceptual engagement, such as being enraptured by music, activity in the prefrontal cortex, which generally focuses on introspection, shuts down. The sensory cortex becomes the focal area of processing and the “self-related” cortex essentially switches off. As neuroscientist Ilan Goldberg describes, “the term ‘losing yourself’ receives here a clear neuronal correlate.” Rather than enabling perceptual awareness, the role of the self-related prefrontal cortex is reflective, evaluating the significance of the music to the self. However, during intense moments, when time seems to stop, or rather, not to exist at all, a selfless, Zen-like state can occur.6
While the sublime sense of being lost in time is relatively rare, the distortion of perceived time is commonplace and routine. Broadly speaking, the brain processes timespans in two ways, one in which an explicit estimate is made regarding the duration of a particular stimulus—perhaps a sound or an ephemeral image—and the second, involving the implicit timespan between stimuli. These processes involve both memory and attention, which modulate the perception of time passing, depending upon how occupied or stimulated we are. Hence time can “fly” when we are occupied, or seem to stand still when we are waiting for the water in the kettle to boil. Unlike the literal loss of “self” that occurs during intense perceptual engagement, the subjective perception of elongated or compressed time is related to self-referential processing. An object—whether image or sound—moving toward you is perceived as longer in duration than the same object that is not moving, or that is receding from you. A looming or receding object triggers increased activation in the anterior insula and anterior cingulate cortices—areas important for subjective awareness.
The directionality of musical melody and gesture evoke similar percepts of temporal dilation. The goal-oriented nature of music provides a framework in which a sense of motion is transposed to sonic structures, and the sensation of “looming” and “receding” can be simulated independently of relative spatial orientation. The subjectivity of time perception can be grounding and self-affirming—a source of great pleasure, or, conversely, able to create a state of disassociation with one’s self—a state of transcendence.7
Considering the composer’s goal of distorting time perception, musical time is notated with remarkable imprecision and ambiguity. Composers, more often than not, rely upon qualitative rather than quantitative directives to inform performers of intended tempo. And if the vagaries of such terms as Adagietto (somewhat slow) or Lentissimo (slower than slow) are not ambiguous enough, terms such as Allegro ma non troppo (fast, but not too fast), terms that connote speed through emotion such as Allegro appassionato, Bravura, or Agitato, and terms that confuse complexity with speed, such as Tempo semplice, oblige the performer to imagine temporality from the composer’s perspective through guesswork. My favorite temporal marking is Tempo rubato, literally “stolen time,” in which duration is added to one event at the expense of another. Long after the German inventor Johann Maelzel patented the metronome in 1815, composers continued to avoid strict measures of time in their scores, relying primarily on adjectival description.
While the manipulation of perceptual time is a pervasive aspect of music, particular composers, including Anton Bruckner, famous for his hour-plus symphonies, and Olivier Messiaen, took the warping of subjective time to extremes. Anton Webern even fooled himself, grossly overestimating the duration of his own 7-minute-long Variations for Orchestra as lasting 20 minutes!
But it is Schubert, more than any other composer, who succeeded in radically commandeering temporal perception. Nowhere is this powerful control of time perception more forceful than in the String Quintet. Schubert composed the four-movement work in 1828, during the feverish last two months of his life. (He died at age 31.) In the work, he turns contrasting distortions of perceptual time into musical structure. Following the opening melody in the first Allegro ma non troppo movement, the second Adagio movement seems to move slowly and be far longer than it really is, then hastens and shortens before returning to a perception of long and slow. The Scherzo that follows reverses the pattern, creating the perception of brevity and speed, followed by a section that feels longer and slower, before returning to a percept of short and fast. The conflict of objective and subjective time is so forcefully felt in the work that it ultimately becomes unified in terms of structural organization.
The following is an audio tour through key moments in the work, demonstrating the extraordinary degree to which Schubert masterfully manipulates subjective time.
From the outset, the opening melody stutters against a motionless drone. It is a single note repeated five times, venturing to a lone new tone and settling back from where it came in an eerie aura of aimless meandering and stasis.
This long-short-long rhythmic pattern obsessively permeates the opening section. Each time the motive repeats, the pitch repetition, resistant to change, hesitatingly reaches outward. It is as if the motive is trying to liberate itself from its static state, yet uncommitted to a clear path. This struggle to move occurs against a laboriously slow and unwaveringly steady harmonic progression.
The nearly static opening section of the second movement ends with an exquisite shock by another pitch repetition, but this time with the harmonic underpinning shifting abruptly, as if we are thrust through an open door into a new world.
This is technically referred to as an enharmonic pivot—in which a musical pitch with a specific function is redefined—renamed and re-contextualized. Here, G sharp, the third degree of E major, becomes A flat, the third degree of the distantly related key, F minor.
This change triggers a shift of mode and a new thrust of forward motion, set off by activation in the accompaniment. Time hastens, with a forward thrust created by literally reversing the two characteristic attributes of the opening pattern. Instead of the opening long-short-long, the pattern is now short-long-short, and the change of pitch that was attempting, almost in vain, to break free at the end of the pattern is now front-ended and catapults the melody into motion, while the repeated note, formerly a stutter, no longer resists motion but rather adds to the forward thrust. If time was gasping for life, it is now suddenly, breathlessly energetic.
The new melody in the middle section aggressively probes the future. Yet while its energetic accompaniment seems faster and shorter than the opening, it is, in fact, exactly equal in duration: the same number of measures, the same number of beats, even the same rate of harmonic change. The section grinds to a halt, introducing the recapitulation of the opening, this time with a more ornamented line and counterpoint that seems, with only limited success, to keep the music from returning to temporal stasis.
Schubert effectively tricks us into perceiving a timespan far more expansive than registered by a stopwatch.8 Within this illusion of stretched time, he embeds another illusion, of a considerably faster and shorter section sandwiched between slow and virtually motionless music, when, in fact, by the metronome and the clock, all three sections are of equal length.
The third movement, Scherzo, which follows the slow Adagio, stands in temporal and rhythmic opposition to its predecessor. Here again, there are three sections. A fast opening,
followed by a middle section in stark contrast to the opening, and the recapitulation of the opening.
In opposition to the subjective long-short-long sections of the Adagio, the perceived timespans here are fast-slow-fast (or short-long-short). The energetic outer sections grind to a near halt with the middle section. Here again, not only is perceptual time manipulated, but the temporal warping is the very underpinning of the overall musical structure. Just as the perceived distorted time of the Adagio was a reflection of the long-short-long surface rhythm of the Adagio, so the short-long-short surface rhythm of the Scherzo mirrors the perceived time of the movement.
Even though I attended my first concert of the String Quintet 40 years ago, I am still overcome by its powers. The slow second movement persistently puts me on the verge of a sense of temporal stasis. I feel submerged in a viscous medium—not at all struggling to move forward, but rather fully absorbed in an alternate temporal universe. Although immersed in the music, I am aware of time in the external world. I tend to breathe along with phrases, and find myself coming up short of breath. Remarkably, this sensation is relived each time I listen to the String Quintet. Like all music that feels both part of time and beyond it, the String Quintet seems to connect us to something bigger than ourselves—in the words of Schubert’s friend and collaborator, the poet Franz von Schober, “time’s infinite ocean.”
Jonathan Berger is the Denning Family Provostial Professor in Music at Stanford University, where he teaches composition, music theory, and cognition at the Center for Computer Research in Music and Acoustics. His most recent recording is Jiyeh, a violin concerto.
Footnotes
1. Down, S. J., The effect of tempo of background music on duration of stay and spending in a bar. Thesis, University of Jyväskylä (2009).
2. Milliman, R. E., Using Background Music to Affect the Behavior of Supermarket Shoppers. Journal of Marketing (1982).
3. Yalch, R. F. & Spangenberg, E. The Effects of Music in a Retail Setting on Real and Perceived Shopping Times. Journal of Business Research (2000).
4. The capacity of the first audio CDs (about 73 minutes) was, by all accounts, set by Sony President Norio Ohga’s desire to hold the entirety of Beethoven’s 9th Symphony on a single disc.
5. In fact, despite the ability to fit longer works on 45s and LPs, the average duration of songs continued to decline through the 1950s.
6. Goldberg, I. I., Harel, M., & Malach, R., When the brain loses its self: Prefrontal inactivation during sensorimotor processing. Neuron (2006).
7. Wittmann, M., van Wassenhove, V., Craig, A. D., & Paulus, M. P., The neural substrates of subjective time dilation. Frontiers in Human Neuroscience (2010).
8. The warping of perceptual time in the String Quintet is even shared by professional musicians, who have played the work many times. This is illustrated by the charts below, created by Jonathan Berger. The first chart shows the length of each movement in nine different recordings of the work. The second chart shows the estimated time of each movement by the performers.
Table 1: Comparison of clock timings of nine commercial recordings of the String Quintet. (The relative brevity of the first movement of performance eight is due to the omission of the repetition of the exposition.)
Table 2: Estimates by five professional string players soon after performing the String Quintet.
Music files courtesy of the Chamber Music Society of Lincoln Center