

Very funny, gdb. Ve-ery funny.


URL:http://www.yosefk.com/blog/very-funny-gdb-ve-ery-funny.html


Have you ever opened a core dump with gdb, tried to print a C++ std::vector element, and got the following?

(gdb) p v[0]
You can't do that without a process to debug.

So after seeing this for years, my thoughts traveled along the path of, we could make a process out of the core dump.

No really, there used to be Unices with a program called undump that did just that. All you need to do is take the (say) ELF core dump file and generate an ELF executable file which loads the memory image saved in the core (that's actually the easier, portable part) and initializes registers to the right values (the harder, less portable part). I even wrote a limited version of undump for PowerPC once.

So we expended some effort on it at work.

And then I thought I'd just check a live C++ process (which I normally don't do, for various reasons). Let's print a vector element:

(gdb) p v[0]
Could not find operator[].
(gdb) p v.at(0)
Cannot evaluate function — may be inlined.

Very funny, gdb. "You can't do that without a process to debug". Well, I guess you never did say that I could do that with a process to debug, now did you. Because, sure enough, I can't. Rolling on the floor, laughing. Ahem.

I suggest that we all ditch our evil C arrays and switch to slow-compiling, still-not-boundary-checked, still-not-working-in-debuggers-after-all-these-YEARS std::vector, std::array and any of the other zillion "improvements".

And gdb has these pretty printers which, if installed correctly (not easy with several gcc/STL versions around), can display std::vector - as in all of its 10000 elements, if that's how many elements it has. But they still don't let you print vec[0].member.vec2[5].member2. Sheesh!

P.S. undump could be useful for other things, say a nice sort of obfuscating scripting language compiler - Perl used to use undump for that AFAIK. And undump would in fact let you call functions in core dumps - if said functions could be, um, found by gdb. Still, ouch.

P.P.S. What gdb prints and when depends on things I do not comprehend. I failed to reproduce the reported behavior in full at home. I've seen it for years at work though.

Which Tech Companies Will Protect Your Privacy? | Everything That Matters


URL:http://sourcefed.com/which-tech-companies-will-protect-your-privacy/


The Electronic Frontier Foundation has released its annual “Who Has Your Back” report card for online privacy.

When using the internet, you entrust your conversations, status updates, check-ins and inane photos to a bunch of online companies from Google to Facebook. Do these companies comply when the government demands your private information?

In this year’s report, the Foundation examined the policies of major internet companies, from ISPs to email and cloud storage providers. It then assessed whether or not they publicly stand on the side of users when the government attempts to seek access to private data.

The idea is to give the companies an incentive to be transparent about data flowing to government agencies. It is also meant to encourage them to take a stand for user privacy when possible. According to Gizmodo, the data on each company was based on six criteria:

  • Require a warrant for content of communications. In this new category, companies earn recognition if they require the government to obtain a warrant supported by probable cause before they will hand over the content of user communications. This policy ensures that private messages stored by online services like Facebook, Google, and Twitter are treated consistently with the protections of the Fourth Amendment.
  • Tell users about government data requests. To earn a star in this category, Internet companies must promise to tell users when the government seeks their data unless prohibited by law. This gives users a chance to defend themselves against overreaching government demands for their data.
  • Publish transparency reports. We award companies a star in this category if they publish statistics on how often they provide user data to the government.
  • Publish law enforcement guidelines. Companies get a star in this category if they make public policies or guidelines they have explaining how they respond to data demands from the government, such as guides for law enforcement.
  • Fight for users’ privacy rights in courts. To earn recognition in this category, companies must have a public record of resisting overbroad government demands for access to user content in court.
  • Fight for users’ privacy in Congress. Internet companies earn a star in this category if they support efforts to modernize electronic privacy laws to defend users in the digital age by joining the Digital Due Process Coalition.

Among the new trends discovered among internet companies today is that more and more companies are apt to inform users if their data has been accessed. Several companies have law enforcement guidelines in their policies, and more companies than ever are fighting for users' rights on Capitol Hill.


Which companies were you disappointed by? Surprised by? Let us know in the comments down below!

Do Things: a Clojure Language Crash Course | Clojure for the Brave and True


URL:http://www.braveclojure.com/do-things/


It's time to learn how to actually do things with Clojure! Hot damn!

While you've undoubtedly heard of Clojure's awesome concurrency support and other stupendous features, Clojure's most salient characteristic is that it is a Lisp. In this chapter, you're going to explore the elements which comprise this Lisp core: syntax, functions, and data. This will provide you with a solid foundation for representing and solving problems in Clojure.

This groundwork will also allow you to write some super important code. In the last section, you'll tie everything together by creating a model of a hobbit and writing a function to hit it in a random spot. Super! Important!

As you go through the chapter, I recommend that you type out the examples in a REPL and run them. Programming in a new language is a skill, and, just like yodeling or synchronized swimming, you have to practice it to learn it. By the way, "Synchronized Swimming for Yodelers for the Brave and True" is due to be published in August of 20never. Check it out!

1. Syntax

Clojure's syntax is simple. Like all Lisps, it employs a uniform structure, a handful of special operators, and a constant supply of parentheses delivered from the parenthesis mines hidden beneath the Massachusetts Institute of Technology, where Lisp was born.

1.1. Forms

All Clojure code is written in a uniform structure. Clojure understands:

Literal representations of data structures like numbers, strings, maps, and vectors Operations

We use the term form to refer to structurally valid code. These literal representations are all valid forms:

1
"a string"
["a" "vector" "of" "strings"]

Your code will rarely contain free-floating literals, of course, since they don't actually do anything on their own. Instead, you'll use literals in operations. Operations are how you do things. All operations take the form, "opening parenthesis, operator, operands, closing parenthesis":

(operator operand1 operand2 ... operandn)

Notice that there are no commas. Clojure uses whitespace to separate operands and it treats commas as whitespace. Here are some example operations:

(+ 1 2 3)
; => 6

(str "It was the panda " "in the library " "with a dust buster")
; => "It was the panda in the library with a dust buster"

To recap, Clojure consists of forms. Forms have a uniform structure. They consist of literals and operations. Operations consist of forms enclosed within parentheses.

For good measure, here's something that is not a form because it doesn't have a closing parenthesis:
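(+ 1 2   ; no closing parenthesis, so this isn't a form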

Clojure's structural uniformity is probably different from what you're used to. In other languages, different operations might have different structures depending on the operator and the operands. For example, JavaScript employs a smorgasbord of infix notation, dot operators, and parentheses:

1 + 2 + 3
"It was the panda ".concat("in the library ", "with a dust buster")

Clojure's structure is very simple and consistent by comparison. No matter what operator you're using or what kind of data you're operating on, the structure is the same.

One final note: I'll also use the term expression to refer to Clojure forms. Don't get too hung up on the terminology, though.

1.2. Control Flow

Here are some basic control flow operators. Throughout the book you'll encounter more.

1.2.1. if

The general structure of if is:

(if boolean-form
  then-form
  optional-else-form)

Here's an example:

(if true
  "abra cadabra"
  "hocus pocus")
; => "abra cadabra"

Notice that each branch of the if can only have one form. This is different from most languages. For example, in Ruby you can write:

if true
  doer.do_thing(1)
  doer.do_thing(2)
else
  other_doer.do_thing(1)
  other_doer.do_thing(2)
end

To get around this apparent limitation, we have the do operator:

1.2.2. do

do lets you "wrap up" multiple forms. Try the following in your REPL:

(if true
  (do (println "Success!")
      "abra cadabra")
  (do (println "Failure :(")
      "hocus pocus"))
; => Success!
; => "abra cadabra"

In this case, Success! is printed in the REPL and "abra cadabra" is returned as the value of the entire if expression.

1.2.3. when

The when operator is like a combination of if and do, but with no else form. Here's an example:

(when true
  (println "Success!")
  "abra cadabra")
; => Success!
; => "abra cadabra"

Use when when you want to do multiple things when some condition is true, and you don't want to do anything when the condition is false.

That covers the essential control flow operators!

1.3. Naming Things with def

One final thing before we move on to data structures: you use def to bind a name to a value in Clojure:

(def failed-protagonist-names
  ["Larry Potter" "Doreen the Explorer" "The Incredible Bulk"])

In this case, you're binding the name failed-protagonist-names to a vector containing three strings. Notice that I'm using the term "bind", whereas in other languages you'd say that you're assigning a value to a variable. For example, in Ruby you might perform multiple assignments to a variable to "build up" its value:

severity = :mild
error_message = "OH GOD! IT'S A DISASTER! WE'RE "
if severity == :mild
  error_message = error_message + "MILDLY INCONVENIENCED!"
elsif severity == :terrible
  error_message = error_message + "DOOOOOOOMED!"
end

The Clojure equivalent would be:

(def severity :mild)
(def error-message "OH GOD! IT'S A DISASTER! WE'RE ")
(if (= severity :mild)
  (def error-message (str error-message "MILDLY INCONVENIENCED!"))
  (def error-message (str error-message "DOOOOOOOMED!")))

However, this is really bad Clojure. For now, you should treat def as if it's defining constants. But fear not! Over the next few chapters you'll learn how to work with this apparent limitation by coding in the functional style.
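To give you a small taste of that style, here's one possible sketch (the severity argument and function name are just illustrative) where the message is computed by a plain function instead of being built up by re-def-ing a var:

;; A hedged sketch of the functional alternative: no redefinition,
;; just a function from severity to message
(defn error-message
  [severity]
  (str "OH GOD! IT'S A DISASTER! WE'RE "
       (if (= severity :mild)
         "MILDLY INCONVENIENCED!"
         "DOOOOOOOMED!")))

(error-message :mild)
; => "OH GOD! IT'S A DISASTER! WE'RE MILDLY INCONVENIENCED!"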

2. Data Structures

Clojure comes with a handful of data structures which you'll find yourself using the majority of the time. If you're coming from an object-oriented background, you'll be surprised at how much you can do with the "basic" types presented here.

All of Clojure's data structures are immutable, meaning you can't change them in place. There's no Clojure equivalent for the following Ruby:

failed_protagonist_names = [
  "Larry Potter",
  "Doreen the Explorer",
  "The Incredible Bulk"
]
failed_protagonist_names[0] = "Gary Potter"
failed_protagonist_names
# => [
#   "Gary Potter",
#   "Doreen the Explorer",
#   "The Incredible Bulk"
# ]

You'll learn more about why Clojure was implemented this way, but for now it's fun to just learn how to do things without all that philosophizing. Without further ado:

2.1. nil, true, false, Truthiness, Equality

Clojure has true and false values. nil is used to indicate "no value" in Clojure. You can check if a value is nil with the cleverly named nil? function:

(nil? 1)
; => false

(nil? nil)
; => true

Both nil and false are used to represent logical falsiness, while all other values are logically truthy. = is the equality operator:

(= 1 1)
; => true

(= nil nil)
; => true

(= 1 2)
; => false

Some other languages require you to use different operators when comparing values of different types. For example, you might have to use some kind of special "string equality" operator specially made just for strings. You don't need to do anything weird or tedious like that to test for equality when using Clojure's built-in data structures.
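For example, the same = works on strings and vectors alike:

(= "hobbit" "hobbit")
; => true

(= [1 2 3] [1 2 3])
; => true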

2.2. Numbers

Clojure has pretty sophisticated numerical support. I'm not going to spend much time dwelling on the boring technical details (like coercion and contagion), because that will get in the way of doing things. If you're interested in said boring details, check out http://clojure.org/data_structures#Data Structures-Numbers. Suffice to say that Clojure will merrily handle pretty much anything you throw at it.

In the meantime, we'll be working with integers and floats. We'll also be working with ratios, which Clojure can represent directly:
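93    ; an integer
1.2   ; a float
1/5   ; a ratio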

2.3. Strings

Here are some string examples:

"Lord Voldemort""\"He who must not be named\"""\"Great cow of Moscow!\" - Hermes Conrad"

Notice that Clojure only allows double quotes to delineate strings. 'Lord Voldemort', for example, is not a valid string. Also notice that Clojure doesn't have string interpolation. It only allows concatenation via the str function:

(def name "Chewbacca")
(str "\"Uggllglglglglglglglll\" - " name)
; => "Uggllglglglglglglglll" - Chewbacca

2.4. Maps

Maps are similar to dictionaries or hashes in other languages. They're a way of associating some value with some other value. Here are example map literals:

;; An empty map
{}

;; ":a", ":b", ":c" are keywords and we'll cover them in the next section
{:a 1
 :b "boring example"
 :c []}

;; Associate "string-key" with the "plus" function
{"string-key" +}

;; Maps can be nested
{:name {:first "John" :middle "Jacob" :last "Jingleheimerschmidt"}}

Notice that map values can be of any type. String, number, map, vector, even function! Clojure don't care!

You can look up values in maps with the get function:

(get {:a 0 :b 1} :b)
; => 1

(get {:a 0 :b {:c "ho hum"}} :b)
; => {:c "ho hum"}

get will return nil if it doesn't find your key, but you can give it a default value to return:

(get {:a 0 :b 1} :c)
; => nil

(get {:a 0 :b 1} :c "UNICORNS")
; => "UNICORNS"

The get-in function lets you look up values in nested maps:

(get-in {:a 0 :b {:c "ho hum"}} [:b :c])
; => "ho hum"

[:b :c] is a vector, which you'll read about in a minute.

Another way to look up a value in a map is to treat the map like a function, with the key as its argument:

({:name "The Human Coffee Pot"} :name)
; => "The Human Coffee Pot"

Real Clojurists hardly ever do this though. However, Real Clojurists do use keywords to look up values in maps:

2.5. Keywords

Clojure keywords are best understood by the way they're used. They're primarily used as keys in maps, as you can see above. Examples of keywords:

:a
:rumplestiltsken
:34
:_?

Keywords can be used as functions. For example:

;; Look up :a in map
(:a {:a 1 :b 2 :c 3})
; => 1

;; This is equivalent to:
(get {:a 1 :b 2 :c 3} :a)
; => 1

;; Provide a default value, just like get:
(:d {:a 1 :b 2 :c 3} "FAERIES")
; => "FAERIES"

I think this is super cool and Real Clojurists do it all the time. You should do it, too!

Besides using map literals, you can use the hash-map function to create a map:

(hash-map :a 1 :b 2)
; => {:a 1 :b 2}

Clojure also lets you create sorted maps, but I won't be covering that.

2.6. Vectors

A vector is similar to an array in that it's a 0-indexed collection:

;; Here's a vector literal
[3 2 1]

;; Here we're returning an element of a vector
(get [3 2 1] 0)
; => 3

;; Another example of getting by index. Notice as well that vector
;; elements can be of any type and you can mix types.
(get ["a" {:name "Pugsley Winterbottom"} "c"] 1)
; => {:name "Pugsley Winterbottom"}

Notice that we're using the same get function as we use when looking up values in maps. The next chapter explains why we do this.

You can create vectors with the vector function:

(vector "creepy" "full" "moon")
; => ["creepy" "full" "moon"]

Elements get added to the end of a vector:

(conj [1 2 3] 4)
; => [1 2 3 4]

2.7. Lists

Lists are similar to vectors in that they're linear collections of values. There are some differences, though. You can't retrieve list elements with get:

;; Here's a list - note the preceding single quote
'(1 2 3 4)
; => (1 2 3 4)
;; Notice that the REPL prints the list without a quote. This is OK,
;; and it'll be explained later.

;; Doesn't work for lists
(get '(100 200 300 400) 0)

;; This works but has different performance characteristics which we
;; don't care about right now.
(nth '(100 200 300 400) 3)
; => 400

You can create lists with the list function:

(list 1 2 3 4)
; => (1 2 3 4)

Elements get added to the beginning of a list:

(conj '(1 2 3) 4)
; => (4 1 2 3)

When should you use a list and when should you use a vector? For now, you're probably best off just using vectors. As you learn more, you'll get a good feel for when to use which.

2.8. Sets

Sets are collections of unique values:

;; Literal notation
#{"hannah montanna" "miley cyrus" 20 45}

;; If you try to add :b to a set which already contains :b,
;; the set still only has one :b
(conj #{:a :b} :b)
; => #{:a :b}

;; You can check whether a value exists in a set
(get #{:a :b} :a)
; => :a

(:a #{:a :b})
; => :a

(get #{:a :b} "hannah montanna")
; => nil

You can create sets from existing vectors and lists by using the set function. One unobvious use for this is to check whether an element exists in a collection:

(set [3 3 3 4 4])
; => #{3 4}

;; 3 exists in vector
(get (set [3 3 3 4 4]) 3)
; => 3

;; but 5 doesn't
(get (set [3 3 3 4 4]) 5)
; => nil

Just as you can create hash maps and sorted maps, you can create hash sets and sorted sets:

(hash-set 1 1 3 1 2)
; => #{1 2 3}

(sorted-set :b :a :c)
; => #{:a :b :c}

Clojure also lets you define how a set is sorted using the sorted-set-by function, but this book doesn't cover that.

2.9. Symbols and Naming

Symbols are identifiers that are normally used to refer to something. Let's look at a def example:

(def failed-movie-titles ["Gone With the Moving Air" "Swellfellas"])

In this case, def associates the value ["Gone With the Moving Air" "Swellfellas"] with the symbol failed-movie-titles.

You might be thinking, "So what? Every other programming language lets me associate a name with a value. Big whoop!" Lisps, however, allow you to manipulate symbols as data, something we'll see a lot of when we start working with macros. Functions can return symbols and take them as arguments:

;; Identity returns its argument
(identity 'test)
; => test

For now, though, it's OK to think "Big whoop!" and not be very impressed.

2.10. Quoting

You may have noticed the single quote, ', in the examples above. This is called "quoting". You'll learn about this in detail in the chapter "Clojure Alchemy: Reading, Evaluation, and Macros". Here's the quick explanation for now.

Giving Clojure a symbol returns the "object" it refers to:

failed-protagonist-names
; => ["Larry Potter" "Doreen the Explorer" "The Incredible Bulk"]

(first failed-protagonist-names)
; => "Larry Potter"

Quoting a symbol tells Clojure to use the symbol itself as a data structure, not the object the symbol refers to:

'failed-protagonist-names
; => failed-protagonist-names

(eval 'failed-protagonist-names)
; => ["Larry Potter" "Doreen the Explorer" "The Incredible Bulk"]

(first 'failed-protagonist-names)
; => Throws exception!

(first ['failed-protagonist-names 'failed-antagonist-names])
; => failed-protagonist-names

You can also quote collections like lists, maps, and vectors. All symbols within the collection will be unevaluated:

'(failed-protagonist-names 0 1)
; => (failed-protagonist-names 0 1)

(first '(failed-protagonist-names 0 1))
; => failed-protagonist-names

(second '(failed-protagonist-names 0 1))
; => 0

2.11. Simplicity

You may have noticed that this treatment of data structures doesn't include a description of how to create new types or classes. This is because Clojure's emphasis on simplicity encourages you to reach for the built-in, "basic" data structures first.

If you come from an object-oriented background, you might think that this approach is weird and backwards. What you'll find, though, is that your data does not have to be tightly bundled with a class for it to be useful and intelligible. Here's an epigram loved by Clojurists which hints at the Clojure philosophy:

It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures.
-- Alan Perlis

You'll learn more about this aspect of Clojure's philosophy in the coming chapters. For now, though, keep an eye out for the ways that you gain code re-use by sticking to basic data structures.

Thus concludes our Clojure data structures primer. Now it's time to dig in to functions and see how these data structures can be used!

3. Functions

One of the reasons people go nuts over Lisps is that they allow you to build programs which behave in complex ways, yet the primary building block — the function — is so simple. This section will initiate you in the beauty and elegance of Lisp functions by explaining:

  • Calling functions
  • How functions differ from macros and special forms
  • Defining functions
  • Anonymous functions
  • Returning functions

You can use (doc functionname) and (source functionname) in the REPL to see the documentation or source code for a function.

3.1. Calling Functions

By now you've seen many examples of function calls:

(+ 1 2 3 4)
(* 1 2 3 4)
(first [1 2 3 4])

I've already gone over how all Clojure expressions have the same syntax: opening parenthesis, operator, operands, closing parenthesis. "Function call" is just another term for an expression where the operator is a function expression. A function expression is just an expression which returns a function.

It might not be obvious, but this lets you write some pretty interesting code. Here's a function expression which returns the + (addition) function:

;; Return value of "or" is first truthy value, and + is truthy
(or + -)

You can use that expression as the operator in another expression:
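;; (or + -) evaluates to +, which is then applied to the operands
((or + -) 1 2 3)
; => 6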

Here are a couple more valid function calls which return 6:

;; Return value of "and" is first falsey value or last truthy value.
;; + is the last truthy value
((and (= 1 1) +) 1 2 3)

;; Return value of "first" is the first element in a sequence
((first [+ 0]) 1 2 3)

However, these aren't valid function calls:

;; Numbers aren't functions
(1 2 3 4)

;; Neither are strings
("test" 1 2 3)

If you run these in your REPL you'll get something like

ClassCastException java.lang.String cannot be cast to clojure.lang.IFn  user/eval728 (NO_SOURCE_FILE:1)

You're likely to see this error many times as you continue with Clojure. "x cannot be cast to clojure.lang.IFn" just means that you're trying to use something as a function when it's not.

Function flexibility doesn't end with the function expression! Syntactically, functions can take any expressions as arguments — including other functions.

Take the map function (not to be confused with the map data structure). map creates a new list by applying a function to each member of a collection:

;; The "inc" function increments a number by 1
(inc 1.1)
; => 2.1

(map inc [0 1 2 3])
; => (1 2 3 4)

(Note that map doesn't return a vector even though we supplied a vector as an argument. You'll learn why later. For now, just trust that this is OK and expected.)

Indeed, Clojure's ability to receive functions as arguments allows you to build more powerful abstractions. If you're unfamiliar with this kind of programming, you're probably used to thinking of functions as a way to generalize operations over data instances. For example, the + function abstracts addition over any specific numbers.

By contrast, Clojure (and all Lisps) allows you to create functions which generalize over processes. map allows you to generalize the process of transforming a collection by applying a function — any function — over any collection.
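To make that concrete, here's a small sketch that applies a different function to a different collection using the very same map:

(map clojure.string/upper-case ["hit" "the" "hobbit"])
; => ("HIT" "THE" "HOBBIT")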

The last thing that you need to know about function calls is that Clojure evaluates all function arguments recursively before passing them to the function. Here's how Clojure would evaluate a function call whose arguments are also function calls:

;; Here's the function call. It kicks off the evaluation process
(+ (inc 199) (/ 100 (- 7 2)))

;; All sub-forms are evaluated before applying the "+" function
(+ 200 (/ 100 (- 7 2))) ; evaluated "(inc 199)"
(+ 200 (/ 100 5))       ; evaluated (- 7 2)
(+ 200 20)              ; evaluated (/ 100 5)
220                     ; final evaluation

3.2. Function Calls, Macro Calls, and Special Forms

In the last section, you learned that function calls are expressions which have a function expression as the operator. There are two other kinds of expressions: macro calls and special forms. You've already seen a couple special forms:

(def failed-movie-titles ["Gone With the Moving Air" "Swellfellas"])

(if (= severity :mild)
  (def error-message (str error-message "MILDLY INCONVENIENCED!"))
  (def error-message (str error-message "DOOOOOOOMED!")))

You'll learn everything there is to know about macro calls and special forms in the chapter "Clojure Alchemy: Reading, Evaluation, and Macros". For now, though, the main feature which makes special forms "special" is that they don't always evaluate all of their operands, unlike function calls.

Take if, for example. Its general structure is:

(if boolean-form
  then-form
  optional-else-form)

Now imagine you had an if statement like this:

(if good-mood
  (tweet walking-on-sunshine-lyrics)
  (tweet mopey-country-song-lyrics))

If Clojure evaluated both tweet function calls, then your followers would end up very confused.

Another feature which separates special forms is that you can't use them as arguments to functions.

In general, special forms implement core Clojure functionality that just can't be implemented with functions. There are only a handful of Clojure special forms, and it's pretty amazing that such a rich language is implemented with such a small set of building blocks.

Macros are similar to special forms in that they evaluate their operands differently from function calls. But this detour has taken long enough; it's time to learn how to define functions!

3.3. Defining Functions

Function definitions are comprised of five main parts:

  • defn
  • A name
  • (Optional) a docstring
  • Parameters
  • The function body

Here's an example of a function definition and calling the function:

(defn too-enthusiastic
  "Return a cheer that might be a bit too enthusiastic"
  [name]
  (str "OH. MY. GOD! " name " YOU ARE MOST DEFINITELY LIKE THE BEST "
       "MAN SLASH WOMAN EVER I LOVE YOU AND WE SHOULD RUN AWAY TO SOMEWHERE"))

(too-enthusiastic "Zelda")
; => "OH. MY. GOD! Zelda YOU ARE MOST DEFINITELY LIKE THE BEST MAN SLASH WOMAN EVER I LOVE YOU AND WE SHOULD RUN AWAY TO SOMEWHERE"

Let's dive deeper into the docstring, parameters, and function body.

3.3.1. The Docstring

The docstring is really cool. You can view the docstring for a function in the REPL with (doc fn-name), e.g. (doc map). The docstring is also utilized if you use a tool to generate documentation for your code. In the above example, "Return a cheer that might be a bit too enthusiastic" is the docstring.

3.3.2. Parameters

Clojure functions can be defined with zero or more parameters:

(defn no-params
  []
  "I take no parameters!")

(defn one-param
  [x]
  (str "I take one param: " x " It'd better be a string!"))

(defn two-params
  [x y]
  (str "Two parameters! That's nothing! Pah! I will smoosh them "
       "together to spite you! " x y))

Functions can also be overloaded by arity. This means that a different function body will run depending on the number of arguments passed to a function. Here's how you'd define a multi-arity function:

;; Here's the general form of a multiple-arity function definition.
;; Notice that each arity definition is enclosed in parentheses
;; and has an argument list
(defn multi-arity
  ;; 3-arity arguments and body
  ([first-arg second-arg third-arg]
     (do-things first-arg second-arg third-arg))
  ;; 2-arity arguments and body
  ([first-arg second-arg]
     (do-things first-arg second-arg))
  ;; 1-arity arguments and body
  ([first-arg]
     (do-things first-arg)))

Overloading by arity is one way to provide default values for arguments. In this case, "karate" is the default argument for the chop-type param:

(defn x-chop
  "Describe the kind of chop you're inflicting on someone"
  ([name chop-type]
     (str "I " chop-type " chop " name "! Take that!"))
  ([name]
     (x-chop name "karate")))

If you call x-chop with two arguments, then the function works just as it would if it weren't a multi-arity function:

(x-chop "Kanye West" "slap")
; => "I slap chop Kanye West! Take that!"

If you call x-chop with only one argument, though, then x-chop will actually call itself with the second argument "karate" supplied:

(x-chop "Kanye East")
; => "I karate chop Kanye East! Take that!"

It might seem unusual to define a function in terms of itself like this. If so, great! You're learning a new way to do things!

You can also make each arity do something completely unrelated:

(defn weird-arity
  ([]
     "Destiny dressed you this morning my friend, and now Fear is trying to pull off your pants. If you give up, if you give in, you're gonna end up naked with Fear just standing there laughing at your dangling unmentionables! - the Tick")
  ([number]
     (inc number)))

But most likely, you don't want to do that.

Clojure also allows you to define variable-arity functions by including a "rest-param", as in "put the rest of these arguments in a list with the following name":

(defn codger-communication
  [whippersnapper]
  (str "Get off my lawn, " whippersnapper "!!!"))

(defn codger
  [& whippersnappers] ;; the ampersand indicates the "rest-param"
  (map codger-communication whippersnappers))

(codger "Billy" "Henry" "Anne-Marie" "The Incredible Bulk")
; =>
; ("Get off my lawn, Billy!!!"
;  "Get off my lawn, Henry!!!"
;  "Get off my lawn, Anne-Marie!!!"
;  "Get off my lawn, The Incredible Bulk!!!")

As you can see, when you provide arguments to a variable-arity function, the arguments get treated as a list.

You can mix rest-params with normal params, but the rest-param has to come last:

(defn favorite-things
  [name & things]
  (str "Hi, " name ", here are my favorite things: "
       (clojure.string/join ", " things)))

(favorite-things "Doreen" "gum" "shoes" "berries")
; => "Hi, Doreen, here are my favorite things: gum, shoes, berries"

Finally, Clojure has a more sophisticated way of defining parameters called "destructuring", which deserves its own subsection:

3.3.3. Destructuring

The basic idea behind destructuring is that it lets you concisely bind symbols to values within a collection. Let's look at a basic example:

;; Return the first element of a collection
(defn my-first
  [[first-thing]] ; Notice that first-thing is within a vector
  first-thing)

(my-first ["oven" "bike" "waraxe"])
; => "oven"

Here's how you would accomplish the same thing without destructuring:

(defn my-other-first
  [collection]
  (first collection))

(my-other-first ["nickel" "hair"])
; => "nickel"

As you can see, my-first associates the symbol first-thing with the first element of the vector that was passed in as an argument. You tell my-first to do this by placing the symbol first-thing within a vector.

That vector is like a huge sign held up to Clojure which says, "Hey! This function is going to receive a list or a vector as an argument. Make my life easier by taking apart the argument's structure for me and associating meaningful names with different parts of the argument!"

When destructuring a vector or list, you can name as many elements as you want and also use rest params:

(defn chooser
  [[first-choice second-choice & unimportant-choices]]
  (println (str "Your first choice is: " first-choice))
  (println (str "Your second choice is: " second-choice))
  (println (str "We're ignoring the rest of your choices. "
                "Here they are in case you need to cry over them: "
                (clojure.string/join ", " unimportant-choices))))

(chooser ["Marmalade", "Handsome Jack", "Pigpen", "Aquaman"])
; =>
; Your first choice is: Marmalade
; Your second choice is: Handsome Jack
; We're ignoring the rest of your choices. Here they are in case \
; you need to cry over them: Pigpen, Aquaman

You can also destructure maps. In the same way that you tell Clojure to destructure a vector or list by providing a vector as a parameter, you destructure maps by providing a map as a parameter:

(defn announce-treasure-location
  [{lat :lat lng :lng}]
  (println (str "Treasure lat: " lat))
  (println (str "Treasure lng: " lng)))

(announce-treasure-location {:lat 28.22 :lng 81.33})
; =>
; Treasure lat: 28.22
; Treasure lng: 81.33

Let's look more closely at the destructuring form in the parameter list, {lat :lat lng :lng}:

This is like telling Clojure, "Yo! Clojure! Do me a flava and associate the symbol lat with the value corresponding to the key :lat. Do the same thing with lng and :lng, ok?"

We often want to just take keywords and "break them out" of a map, so there's a shorter syntax for that:

;; Works the same as above.
(defn announce-treasure-location
  [{:keys [lat lng]}]
  (println (str "Treasure lat: " lat))
  (println (str "Treasure lng: " lng)))

You can retain access to the original map argument by using the :as keyword. In the example below, the original map is accessed with treasure-location:

;; Works the same as above.
(defn receive-treasure-location
  [{:keys [lat lng] :as treasure-location}]
  (println (str "Treasure lat: " lat))
  (println (str "Treasure lng: " lng))

  ;; One would assume that this would put in new coordinates for your ship
  (steer-ship! treasure-location))

In general, you can think of destructuring as instructing Clojure how to associate symbols with values in a list, map, or vector.

Now, on to the part of the function that actually does something: the function body!

3.3.4. Function body

Your function body can contain any forms. Clojure automatically returns the last form evaluated:

(defn illustrative-function
  []
  (+ 1 304)
  30
  "joe")

(illustrative-function)
; => "joe"

(defn number-comment
  [x]
  (if (> x 6)
    "Oh my gosh! What a big number!"
    "That number's OK, I guess"))

(number-comment 5)
; => "That number's OK, I guess"

(number-comment 7)
; => "Oh my gosh! What a big number!"

3.3.5. All Functions are Created Equal

One final note: in Clojure, there are no privileged functions. + is just a function, - is just a function, inc and map are just functions. They're no better than your functions! So don't let them give you any lip.

More importantly, this fact helps to demonstrate Clojure's underlying simplicity. In a way, Clojure is very dumb. When you make a function call, Clojure just says, "map? Sure, whatever! I'll just apply this and move on." It doesn't care what the function is or where it came from, it treats all functions the same. At its core, Clojure doesn't give two burger flips about addition, multiplication, or mapping. It just cares about applying functions.

As you program with Clojure more, you'll see that this simplicity is great. You don't have to worry about special rules or syntax for working with functions. They all work the same!

3.4. Anonymous Functions

In Clojure, your functions don't have to have names. In fact, you'll find yourself using anonymous functions all the time. How mysterious!

There are two ways to create anonymous functions. The first is to use the fn form:

;; This looks a lot like defn, doesn't it?
(fn [param-list]
  function body)

;; Example
(map (fn [name] (str "Hi, " name))
     ["Darth Vader" "Mr. Magoo"])
; => ("Hi, Darth Vader" "Hi, Mr. Magoo")

;; Another example
((fn [x] (* x 3)) 8)
; => 24

You can treat fn nearly identically to the way you treat defn. The parameter lists and function bodies work exactly the same. You can use argument destructuring, rest-params, and so on.
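For example, here's a quick sketch of an anonymous function that destructures its single vector argument:

((fn [[a b]] (+ a b)) [10 20])
; => 30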

You could even associate your anonymous function with a name, which shouldn't come as a surprise:

(def my-special-multiplier (fn [x] (* x 3)))

(my-special-multiplier 12)
; => 36

(If it does come as a surprise, then… Surprise!)

There's another, more compact way to create anonymous functions:

;; Whoa this looks weird.
#(* % 3)

;; Apply this weird looking thing
(#(* % 3) 8)
; => 24

;; Another example
(map #(str "Hi, " %)
     ["Darth Vader" "Mr. Magoo"])
; => ("Hi, Darth Vader" "Hi, Mr. Magoo")

You can see that it's definitely more compact, but it's probably also confusing. Let's break it down.

This kind of anonymous function looks a lot like a function call, except that it's preceded by a pound sign, #:

;; Function expression
(* 8 3)

;; Anonymous function
#(* % 3)

This similarity allows you to more quickly see what will happen when this anonymous function gets applied. "Oh," you can say to yourself, "this is going to multiply its argument by 3".

As you may have guessed by now, the percent sign, %, indicates the argument passed to the function. If your anonymous function takes multiple arguments, you can distinguish them like this: %1, %2, %3, etc. % is equivalent to %1:

(#(str %1 " and " %2) "corn bread" "butter beans")
; => "corn bread and butter beans"

You can also pass a rest param:

(#(identity %&) 1 "blarg" :yip)
; => (1 "blarg" :yip)

The main difference between this form and fn is that this form can easily become unreadable and is best used for very short functions.

3.5. Returning Functions

Functions can return other functions. The returned functions are closures, which means that they can access all the variables that were in scope when the function was created.

Here's a standard example:

;; inc-by is in scope, so the returned function has access to it even
;; when the returned function is used outside inc-maker
(defn inc-maker
  "Create a custom incrementor"
  [inc-by]
  #(+ % inc-by))

(def inc3 (inc-maker 3))

(inc3 7)
; => 10

Woohoo!

4. Pulling It All Together

OK! Let's pull all this together and use our knowledge for a noble purpose: smacking around hobbits!

In order to hit a hobbit, we'll first model its body parts. Each body part will include its relative size to help us determine how likely it is that that part will be hit.

In order to avoid repetition, this hobbit model will only include entries for "left foot", "left ear", etc. Therefore, we'll need a function to fully symmetrize the model.

Finally, we'll create a function which iterates over our body parts and randomly chooses the one hit.

Fun!

4.1. The Shire's Next Top Model

For our hobbit model, we'll eschew such characteristics as "joviality" and "mischievousness" and focus only on the hobbit's tiny body. Here's our hobbit model:

(def asym-hobbit-body-parts [{:name "head" :size 3}
                             {:name "left-eye" :size 1}
                             {:name "left-ear" :size 1}
                             {:name "mouth" :size 1}
                             {:name "nose" :size 1}
                             {:name "neck" :size 2}
                             {:name "left-shoulder" :size 3}
                             {:name "left-upper-arm" :size 3}
                             {:name "chest" :size 10}
                             {:name "back" :size 10}
                             {:name "left-forearm" :size 3}
                             {:name "abdomen" :size 6}
                             {:name "left-kidney" :size 1}
                             {:name "left-hand" :size 2}
                             {:name "left-knee" :size 2}
                             {:name "left-thigh" :size 4}
                             {:name "left-lower-leg" :size 3}
                             {:name "left-achilles" :size 1}
                             {:name "left-foot" :size 2}])

This is a vector of maps. Each map has the name of the body part and relative size of the body part. Look, I know that only anime characters have eyes 1/3 the size of their head, but just go with it, OK?

Conspicuously missing is the hobbit's right side. Let's fix that:

(defn has-matching-part?
  [part]
  (re-find #"^left-" (:name part)))

(defn matching-part
  [part]
  {:name (clojure.string/replace (:name part) #"^left-" "right-")
   :size (:size part)})

(defn symmetrize-body-parts
  "Expects a seq of maps which have a :name and :size"
  [asym-body-parts]
  (loop [remaining-asym-parts asym-body-parts
         final-body-parts []]
    (if (empty? remaining-asym-parts)
      final-body-parts
      (let [[part & remaining] remaining-asym-parts
            final-body-parts (conj final-body-parts part)]
        (if (has-matching-part? part)
          (recur remaining (conj final-body-parts (matching-part part)))
          (recur remaining final-body-parts))))))

(symmetrize-body-parts asym-hobbit-body-parts)
; => the following is the return value
[{:name "head", :size 3}
 {:name "left-eye", :size 1}
 {:name "right-eye", :size 1}
 {:name "left-ear", :size 1}
 {:name "right-ear", :size 1}
 {:name "mouth", :size 1}
 {:name "nose", :size 1}
 {:name "neck", :size 2}
 {:name "left-shoulder", :size 3}
 {:name "right-shoulder", :size 3}
 {:name "left-upper-arm", :size 3}
 {:name "right-upper-arm", :size 3}
 {:name "chest", :size 10}
 {:name "back", :size 10}
 {:name "left-forearm", :size 3}
 {:name "right-forearm", :size 3}
 {:name "abdomen", :size 6}
 {:name "left-kidney", :size 1}
 {:name "right-kidney", :size 1}
 {:name "left-hand", :size 2}
 {:name "right-hand", :size 2}
 {:name "left-knee", :size 2}
 {:name "right-knee", :size 2}
 {:name "left-thigh", :size 4}
 {:name "right-thigh", :size 4}
 {:name "left-lower-leg", :size 3}
 {:name "right-lower-leg", :size 3}
 {:name "left-achilles", :size 1}
 {:name "right-achilles", :size 1}
 {:name "left-foot", :size 2}
 {:name "right-foot", :size 2}]

Holy shipmates! This has a lot going on that we haven't discussed yet. So let's discuss it!

4.2. let

In our symmetrizer above, we saw the following:

(let [[part & remaining] remaining-asym-parts
      final-body-parts (conj final-body-parts part)]
  some-stuff)

All this does is bind the names on the left to the values on the right. You can think of let as short for "let it be", which is also a beautiful Beatles song (in case you didn't know (in which case, wtf?)). For example, "Let final-body-parts be (conj final-body-parts part)."

Here are some simpler examples:

(let [x 3]
  x)
; => 3

(def dalmatian-list
  ["Pongo" "Missis" "Puppy 1" "Puppy 2"]) ; and 97 more...

(let [dalmatians (take 2 dalmatian-list)]
  dalmatians)
; => ("Pongo" "Missis")

You can also use rest-params in let, just like you can in functions:

(let [[pongo & dalmatians] dalmatian-list]
  [pongo dalmatians])
; => ["Pongo" ("Missis" "Puppy 1" "Puppy 2")]

Notice that the value of a let form is the last form in its body which gets evaluated.
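For instance, in this small sketch only the last form's value comes back:

(let [x 1
      y 2]
  (+ x y)      ; evaluated, but not returned
  "last form") ; the value of the whole let
; => "last form"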

let forms follow all the destructuring rules which we introduced in "Destructuring" above.

One way to think about let forms is that they provide parameters and their arguments side-by-side. let forms have two main uses:

  • They provide clarity by allowing you to name things
  • They allow you to evaluate an expression only once and re-use the result. This is especially important when you need to re-use the result of an expensive function call, like a network API call. It's also important when the expression has side effects.
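Here's a minimal sketch of that second use. The file name is made up, but the point is that slurp runs only once and its result is re-used:

(let [contents (slurp "treasure-map.txt")] ; hypothetical file, read just once
  [(count contents)
   (clojure.string/upper-case contents)])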

Let's have another look at the let form in our symmetrizing function so we can understand exactly what's going on:

;; Associate "part" with the first element of "remaining-asym-parts"
;; Associate "remaining" with the rest of the elements in "remaining-asym-parts"
;; Associate "final-body-parts" with the result of (conj final-body-parts part)
(let [[part & remaining] remaining-asym-parts
      final-body-parts (conj final-body-parts part)]
  (if (has-matching-part? part)
    (recur remaining (conj final-body-parts (matching-part part)))
    (recur remaining final-body-parts)))

Notice that part, remaining, and final-body-parts each gets used multiple times in the body of the let. If, instead of using the names part, remaining, and final-body-parts we used the original expressions, it would be a mess! For example:

(let [[part & remaining] remaining-asym-parts
      final-body-parts (conj final-body-parts part)]
  (if (has-matching-part? (first remaining-asym-parts))
    (recur (rest remaining-asym-parts)
           (conj (conj final-body-parts (first remaining-asym-parts))
                 (matching-part (first remaining-asym-parts))))
    (recur (rest remaining-asym-parts)
           (conj final-body-parts (first remaining-asym-parts)))))

So, let is a handy way to introduce names for values.

4.3. loop

loop provides an efficient way to do recursion in Clojure. Let's look at a simple example:

(loop [iteration 0]
  (println (str "Iteration " iteration))
  (if (> iteration 3)
    (println "Goodbye!")
    (recur (inc iteration))))
; => Iteration 0
; => Iteration 1
; => Iteration 2
; => Iteration 3
; => Iteration 4
; => Goodbye!

The first line, loop [iteration 0] begins the loop and introduces a binding with an initial value. This is almost like calling an anonymous function with a default value. On the first pass through the loop, iteration has a value of 0.

Next, it prints a super interesting little message.

Then, it checks the value of iteration - if it's greater than 3 then it's time to say goodbye. Otherwise, we recur. This is like calling the anonymous function created by loop, but this time we pass it an argument, (inc iteration).

You could in fact accomplish the same thing just using functions:

(defn recursive-printer
  ([]
     (recursive-printer 0))
  ([iteration]
     (println (str "Iteration " iteration))
     (if (> iteration 3)
       (println "Goodbye!")
       (recursive-printer (inc iteration)))))

(recursive-printer)
; => Iteration 0
; => Iteration 1
; => Iteration 2
; => Iteration 3
; => Iteration 4
; => Goodbye!

As you can see, this is a little more verbose. Also, loop has much better performance.

4.4. Regular Expressions

Regular expressions are tools for performing pattern matching on text. I won't go into how they work, but here's their literal notation:

;; pound, open quote, close quote
#"regular-expression"

In our symmetrizer, re-find checks whether the part's name starts with the string "left-", returning the matched text if it does and nil if it doesn't:

(defn has-matching-part?
  [part]
  (re-find #"^left-" (:name part)))

(has-matching-part? {:name "left-eye"})
; => "left-"

(has-matching-part? {:name "neckbeard"})
; => nil

matching-part uses a regex to replace "left-" with "right-":

(defn matching-part
  [part]
  {:name (clojure.string/replace (:name part) #"^left-" "right-")
   :size (:size part)})

(matching-part {:name "left-eye" :size 1})
; => {:name "right-eye" :size 1}

4.5. Symmetrizer

Now let's analyze the symmetrizer fully. Numbered notes like ~~~1~~~ float in the ocean of code below, and each one is explained afterwards:

(def asym-hobbit-body-parts [{:name "head" :size 3}
                             {:name "left-eye" :size 1}
                             {:name "left-ear" :size 1}
                             {:name "mouth" :size 1}
                             {:name "nose" :size 1}
                             {:name "neck" :size 2}
                             {:name "left-shoulder" :size 3}
                             {:name "left-upper-arm" :size 3}
                             {:name "chest" :size 10}
                             {:name "back" :size 10}
                             {:name "left-forearm" :size 3}
                             {:name "abdomen" :size 6}
                             {:name "left-kidney" :size 1}
                             {:name "left-hand" :size 2}
                             {:name "left-knee" :size 2}
                             {:name "left-thigh" :size 4}
                             {:name "left-lower-leg" :size 3}
                             {:name "left-achilles" :size 1}
                             {:name "left-foot" :size 2}])

(defn has-matching-part?
  [part]
  (re-find #"^left-" (:name part)))

(defn matching-part
  [part]
  {:name (clojure.string/replace (:name part) #"^left-" "right-")
   :size (:size part)})

; ~~~1~~~
(defn symmetrize-body-parts
  "Expects a seq of maps which have a :name and :size"
  [asym-body-parts]
  (loop [remaining-asym-parts asym-body-parts ; ~~~2~~~
         final-body-parts []]
    (if (empty? remaining-asym-parts) ; ~~~3~~~
      final-body-parts
      (let [[part & remaining] remaining-asym-parts ; ~~~4~~~
            final-body-parts (conj final-body-parts part)]
        (if (has-matching-part? part) ; ~~~5~~~
          (recur remaining (conj final-body-parts (matching-part part))) ; ~~~6~~~
          (recur remaining final-body-parts))))))
  1. This function employs a general strategy which is common in functional programming: given a sequence (in this case, a vector of body parts and their sizes), continuously split the sequence into a "head" and a "tail", process the head, add it to some result, and then use recursion to continue the process with the tail.
  2. Begin looping over the body parts. The "tail" of the sequence will be bound to remaining-asym-parts; initially, it's bound to the full sequence passed to the function, asym-body-parts. Create a result sequence, final-body-parts, whose initial value is an empty vector.
  3. If remaining-asym-parts is empty, that means we've processed the entire sequence and can return the result, final-body-parts.
  4. Otherwise, split the list into a head, part, and tail, remaining. Also, add part to final-body-parts and re-bind the result to the name final-body-parts. This might seem weird, and it's worthwhile to figure out why it works: our growing sequence of final-body-parts already includes the body part we're currently examining, part.
  5. Here, we decide whether we need to add the matching body part to the list.
  6. If so, then add the matching-part to final-body-parts and recur. Otherwise, just recur.

If you're new to this kind of programming, this might take some time to puzzle out. Stick with it! Once you understand what's happening, you'll feel like a million bucks!

4.6. Shorter Symmetrizer with Reduce

The pattern of "process each element in a sequence and build a result" is so common that there's a function for it: reduce.

Here's a simple example:

;; sum with reduce
(reduce + [1 2 3 4])
; => 10

This is like telling Clojure to do this:
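;; Roughly, the nested calls that reduce performs:
(+ (+ (+ 1 2) 3) 4)
; => 10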

So, reduce works by doing this:

  1. Apply the given function to the first two elements of a sequence. That's where (+ 1 2) comes from.
  2. Apply the given function to the result and the next element of the sequence. In this case, the result of step 1 is 3, and the next element of the sequence is 3 as well. So you end up with (+ 3 3).
  3. Repeat step 2 for every remaining element in the sequence.

Reduce also takes an optional initial value. 15 is the initial value here:
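;; 15 is used as the starting value
(reduce + 15 [1 2 3 4])
; => 25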

If you provide an initial value, then reduce starts by applying the given function to the initial value and the first element of the sequence, rather than the first two elements of the sequence.

To further understand how reduce works, here's one way that it could be implemented:

(defn my-reduce
  ([f initial coll]
     (loop [result initial
            remaining coll]
       (let [[current & rest] remaining]
         (if (empty? remaining)
           result
           (recur (f result current) rest)))))
  ([f [head & tail]]
     (my-reduce f (f head (first tail)) (rest tail))))

We could re-implement symmetrize as follows:

(defn better-symmetrize-body-parts
  "Expects a seq of maps which have a :name and :size"
  [asym-body-parts]
  (reduce (fn [final-body-parts part]
            (let [final-body-parts (conj final-body-parts part)]
              (if (has-matching-part? part)
                (conj final-body-parts (matching-part part))
                final-body-parts)))
          []
          asym-body-parts))

Groovy!

4.7. Hobbit Violence

My word, this is truly Clojure for the Brave and True!

Now, let's create a function that will determine which part of the hobbit gets hit:

(defn hit
  [asym-body-parts]
  (let [sym-parts (better-symmetrize-body-parts asym-body-parts)
        body-part-size-sum (reduce + 0 (map :size sym-parts))
        target (inc (rand body-part-size-sum))]
    (loop [[part & rest] sym-parts
           accumulated-size (:size part)]
      (if (> accumulated-size target)
        part
        (recur rest (+ accumulated-size (:size part)))))))

(hit asym-hobbit-body-parts)
; => {:name "right-upper-arm", :size 3}

(hit asym-hobbit-body-parts)
; => {:name "chest", :size 10}

(hit asym-hobbit-body-parts)
; => {:name "left-eye", :size 1}

Oh my god, that poor hobbit! You monster!

5. What Now?

By this point I highly recommend actually writing some code to solidify your Clojure knowledge if you haven't started already. The Clojure Cheatsheet is a great reference listing all the built-in functions which operate on the data structures we covered.

One great place to start would be to factor out the loop in the hit function. Or, work through some Project Euler challenges. You can also check out 4Clojure, an online set of Clojure problems designed to test your knowledge. Just write something!


Scientist from Astana has solved one of the most difficult mathematical tasks of the millennium


URL:http://www.inform.kz/eng/article/2619922



ASTANA. January 10. KAZINFORM - Mukhtarbai Otelbayev, a professor from Astana, has solved one of the seven most difficult mathematical problems, the so-called "millennium problems". Previously, such success had been achieved only by Grigori Perelman, who proved the Poincaré conjecture, as reported by the press service of the Eurasian National University.

Mukhtarbai Otelbayev, professor, Doctor of Physics and Mathematics of the National Academy of Sciences of the Republic of Kazakhstan and director of the Eurasian Mathematical Institute of Gumilev ENU, has completed and published his work "Existence of the strong solution of the Navier-Stokes equations" in the public media. The publication is significant because this problem is included in the list of the 7 most difficult mathematical problems, the "millennium problems". The Clay Mathematics Institute announced a prize of 1 million USD for the solution of each of these problems at the beginning of the year 2000. Until now only one of them had been solved (the Poincaré conjecture), and Grigori Perelman was awarded the Fields Medal for its solution.

The press service notes that Mukhtarbai Otelbayev's scientific interests include the spectral theory of operators, the theory of contraction and extension of operators, embedding theory for function spaces, approximation theory, computational mathematics, and inverse problems.

Mukhtarbai Otelbayev holds the title "Scientific figure of the year" from the 2002 "Altyn adam" contest; in 2002-2003 and 2004-2005 he held the state scientific scholarship for scientists and specialists contributing to scientific and technical development; he is a 2004 laureate of the Economic Cooperation Organization prize in the "Science and technologies" category; he holds a "Best university professor" grant from the Ministry of Education and Science of the Republic of Kazakhstan; and he is a laureate of the state prize of the Republic of Kazakhstan in the sphere of science and technology.
Source: astana.gov.kz



Tortured by the Japanese in WW2, what happened when a former POW met his chief tormentor again 50 years later | Abroad in the Yard


URL:http://www.abroadintheyard.com/tortured-by-japanese-ww2-former-pow-met-chief-tormentor-again-50-years-later/


Eric Lomax, who died on Monday aged 93, was starved, viciously beaten and tortured as a prisoner of the Japanese during WW2.  Fifty years later, he was to meet his chief tormentor again.

An account of his story published in the Reader’s Digest in 1994 generated such interest that, a year later, he published his own memoir called The Railway Man.

Eric Lomax was born in Edinburgh on 30 May 1919.  Just before the outbreak of World War 2 in 1939, aged 19, he joined the Royal Corps of Signals.  Commissioned in December 1940, he was posted to Malaya in 1941, but his unit was soon in full retreat to Singapore, where he was captured by the Japanese in February 1942.  With thousands of his fellow prisoners, he undertook a forced march to Changi Prison, and was then transported 1,200 miles to Kanchanaburi, Thailand, and forced to work on the notorious Burma-Siam Railway.

By day, the prisoners laboured in temperatures exceeding 38°C.  By night, they slept on wooden planks in dismal bamboo huts.  Nearly all the men were depleted from malnutrition and disease, and they were dying by the score.

To obtain war news, Lomax and a few other prisoners had secretly built a radio receiver from scrap materials they collected.  They concealed it in a coffee tin and huddled around it at night.  Lomax also drew a map of the area around the railway to aid in possible escape attempts, gaining information from truck drivers, new prisoners, and Japanese maps whenever he had access to camp offices.  He hid his map in the latrine.  The radio went undetected for a few months until one morning when the Japanese conducted a surprise search of the huts.  It was discovered under the bunk of another prisoner, whose immediate punishment was to swing a 270lb sledgehammer onto a block of wood for hours at a time.

A few weeks later, Lomax and 4 of his radio ‘co-conspirators’ were ordered to gather their belongings to move to another camp in Kanchanaburi.  Lomax ducked into the latrine and grabbed the map.  When they arrived at the new camp, the prisoners were thrown to the ground and their meagre possessions ransacked.  A guard found Lomax’s map and they were ordered to stand at attention all day in the scorching sun, without food or water.  Finally, that night, one of the prisoners was ordered to raise his arms above his head. A soldier swung the wooden handle of a pickaxe down across the man’s back, knocking him to the ground. Other guards joined in, beating and kicking the man until he appeared lifeless. Another prisoner was similarly beaten.  Lomax was next.  Within seconds he was slammed to the ground, and his mouth filled with blood.  He felt boot heels on the back of his head, crunching his face into the gravel.  He heard the crack of his own bones.  The beating went on until he lost consciousness.

When Lomax woke the next morning, his body was numb.  The other four men were sprawled nearby, groaning.  They lay under the fierce sun for two days before fellow POWs were sent to carry them to the camp hospital, where a Dutch doctor treated them as best he could.  Lomax was in the worst condition.  His nose, arms, right hip and several ribs were broken. Bruises covered his body. “You men suffered the most horrendous beatings I have ever witnessed,” the doctor said. “I counted 900 blows over six hours.”

Two weeks after the beatings, with his arms encased in splints and bandages, Lomax was driven to the Japanese military-police headquarters in Kanchanaburi.  There, he was locked in a 5ft cage that soon became full of red ants, mosquitoes and his own filth.

Eventually, he was brought before a shaven-headed NCO, “his face full of violence” and an “almost delicate” interpreter, Takashi Nagase, for interrogation.  In fluent English, Nagase accused Lomax of ‘anti-Japanese activities’ and stated that he would be ‘killed shortly’.  Lomax remembered it as, “a flat neutral piece of information … I had just been sentenced to death by a man my own age who seemed completely indifferent to my fate. I had no reason to doubt him.”

Nagase said to Lomax, “We know you were involved in building and operating the radio – your friends confessed to your part in it. Now tell us: Who else was involved?”  Lomax refused to tell them.  They wanted to know why Lomax had a secret map of the area around the railway, and where he had got the information to draw it.  Unsurprisingly, his explanation that he was a railway enthusiast who had simply mapped it for pleasure, of his own accord, did not convince them.

The interrogation went on for hours, then days.  Nagase was always on hand as interpreter.  Eventually the military policemen began to slap Lomax, and then deliver repeated blows to his face as his silence continued.  When the policemen stalked out of the room momentarily, Nagase whispered to Lomax, “If you confess, they’ll stop beating you.”  But Lomax remained defiant.

On the 5th day of interrogation, Lomax was accused of being a spy – a crime punishable by death.  When Nagase told him he had to sign a confession, Lomax again refused.  Lomax was dragged out to the banks of the River Kwai and was laid on his back on a bench.  One of his broken arms was pulled behind his back, the other across his chest, and he was tied down. He was in agony.

“Are you ready to talk?” Nagase asked.  Lomax shook his head.

A towel was put over his mouth and nose. Then one of the guards picked up a long rubber hose, turned a faucet on full force, and directed the stream onto the towel. The water soaked through, blocking Lomax’s mouth and nose.  He gagged and frantically gasped for breath as water filled his throat.  His stomach began to swell.  He was drowning on dry land.  When the towel was finally removed and Lomax had recovered from his delirium, he still refused to confess and name his confederates.  The water torture began once more.  At times, Lomax ended up crying out for his mother, unaware that she had died soon after his capture.

The interrogation and torture finally stopped after more than a week.  The Japanese had brought Lomax as close to death as possible, yet he showed no signs of giving in.  Nagase informed Lomax that he was being transferred out of the camp.  Expressing empathy with the prisoner, Nagase said, “Keep your chin up.”

It had no effect on Lomax, who was consumed with hatred for Nagase.  In Lomax’s mind, Nagase personified all the atrocities committed by the Japanese.  His was the voice that Lomax heard hour after hour, when the torture began and ended.  During the interrogations, Lomax memorised every feature of Nagase’s face: the dark eyes, the small nose, the broad forehead.  He wanted to remember him, and someday find him and make him pay.

Eric Lomax and Nagase Takashi during WW2

Lomax was tried in a court in Bangkok for his ‘crimes’ and sentenced to 5 years’ hard labour.  He was sent to a disease-ridden prison in Singapore and twice feigned injury in order to be sent to hospital.  He stayed there until the war ended, when his suffering appeared to be over.

When Lomax finally returned home to Britain, he learned that his mother had died three years before, and his father had remarried. He was relieved to find, however, that his fiancée had waited for him.  They married three weeks after his arrival, and Lomax’s life seemed to settle into a comfortable routine.  He retired from the army in 1948, worked abroad for some years and later got a job teaching personnel management at Strathclyde University in Glasgow. He also became the father of two girls.

But his wartime past wouldn’t leave him.  The fractured bones in his right arm and wrist never set properly, making it painful for him to write.  He also had frequent nightmares in which he would see Nagase’s face and hear his voice.  He refused to talk about the war, reasoning that nobody would understand.  He would lose his temper over trivial matters, such as bureaucratic requests for personal information.  When his wife asked him what was wrong, Lomax remained tight-lipped and sullen. Finally the marriage ended.

In 1983, at the age of 64, Lomax married Patricia Wallace, a 46-year-old nurse.  Patti understood that her husband’s angry outbursts were related to his wartime experiences and assumed things would get better with time.  Unfortunately, matters grew worse and the flashbacks continued.  He even once refused to take a seat in a restaurant because a Japanese couple was eating nearby.  At his wife’s urging, Lomax contacted the Medical Foundation for the Care of Victims of Torture, and began treatments with a psychiatrist, talking about his experiences as a POW.  But, he remained darkly obsessed with his torturers, especially the interpreter.  He located and wrote to other British survivors of Kanchanaburi, requesting information about the camp officials.  Nothing came of his efforts.  Then, in October 1989, a friend gave Lomax a newspaper clipping about the publication of Crosses and Tigers, a book by Takashi Nagase.

To Lomax’s amazement, the article explained how, “the author has flashbacks of the Japanese military police in Kanchanaburi torturing a POW accused of possessing a map.  One of their methods was to pour large amounts of water down his throat.”  The article spoke of Nagase’s remorse over Japanese atrocities and his public acts of atonement to the victims.

Lomax got a copy of Nagase’s book and found it very painful to read, especially the details of his interrogation and torture.  His wife suggested that he write to Nagase.  Lomax refused, but gave her grudging permission to send a letter on her own.

“I have just finished reading your book,” Patti wrote. “My husband is the man you describe being tortured so terribly”. She went on to say he had lived with many unanswered questions all these years and ended with a request: “if you are willing, perhaps you would correspond with my husband?”

Patti Lomax was moved to tears by Nagase’s reply. “I have suffered tremendous guilt all these years,” he wrote.  “I have often prayed I would meet your husband again and be able to seek forgiveness for what I assisted in.”

Convinced of Nagase’s sincerity, Patti suggested to her husband that he should write himself.  While Lomax was extremely reluctant to contact the very object of his hatred, Patti gently suggested, “Maybe it’s time to step out of the darkness.”

Eventually Lomax agreed that Nagase’s remorse must be genuine and replied with a note, “Perhaps a meeting would be good for us.”  They agreed to meet at the World War 2 museum in Kanchanaburi on 26 March 1993 – almost 50 years after their first encounter.

Lomax travelled to the Far East with Patti.  On the day of the meeting, he nervously paced about the museum’s terrace.  Then he saw a slight Japanese man walking towards him. The face was much older, but still instantly recognisable.  The former interpreter identified Lomax just as quickly.  When Nagase reached Lomax, he bowed deeply. “I am so very sorry,” he said softly. “I would like…” His voice cracked, and he began to cry.  On instinct, Lomax put out his hand, and Nagase clasped it tightly.  They sat together in silence on a nearby bench. Finally Lomax spoke, “Do you remember what you told me when we last met?”

“No, I don’t,” Nagase replied.

“You said, ‘Keep your chin up.’” Lomax paused, then smiled.

The tension began to vanish.  Over the next three days, the men talked about their lives since the war.  Their rapport grew easier with time.

The day before they were to part, the two men sat across from each other in silence. Then Lomax handed Nagase a letter he had written the night before. “I think you’d like to have this,” he said.

Nagase unfolded the page and read the words, “Although I can’t forget the ill treatment at Kanchanaburi, taking into account your change of heart, your apologies, the work you are doing, please accept my total forgiveness.”

Nagase looked up and grasped Lomax’s hand.  Both men had tears in their eyes.

“I’ve learned that hate is a useless battle,” Lomax said, “and it has to end sometime.”

The two men went on to become firm friends, and their remarkable story of reconciliation has been turned into a film, starring Colin Firth as Eric Lomax, with Nicole Kidman playing his wife.  Sadly, Mr Lomax will not now see the film’s release next year.

Eric Lomax and Nagase Takashi in 1998 at the River Kwai, Thailand

 

* Update January 2014 – the movie is out now in cinemas:

Anonymous hacks MIT website on anniversary of Aaron Swartz suicide - Techie News

$
0
0

Comments:" Anonymous hacks MIT website on anniversary of Aaron Swartz suicideTechie News"

URL:http://www.techienews.co.uk/974669/anonymous-hacks-mit-website-anniversary-aaron-swartz-suicide/


Anonymous is at it again and has defaced the Cogeneration project page of MIT on the anniversary of Aaron Swartz’s suicide.

The project’s webpage is still defaced as of this writing and carries the title “THE DAY WE FIGHT BACK”.

On this day exactly a year ago, Aaron Swartz, the co-founder of Reddit, Creative Commons and Demand Progress, committed suicide in New York City – a death his family believes was brought on by MIT and an overzealous Department of Justice prosecution.

Anonymous defaced the website as a part of Operation Last Resort, which is in retaliation for the suicide. “We decided to hack MIT again in 2014 on the anniversary with a second tribute to Aaron Swartz http://cogen.mit.edu/ #TheDayWefightback”, read a tweet from OpLastResort.

February 14 is commemorated as a protest day “The Day We Fight Back” in honor of Aaron Swartz to highlight and draw attention to the role played by the activist in the victory over the Stop Online Piracy Act.

This is not the first time MIT has been hacked, as Anonymous, under its Operation Last Resort, has targeted MIT twice prior to today’s hack. The first instance was just hours after Swartz’s suicide last year and the second hack, which was a rather huge one, was on January 22, 2013.

Update

Security expert Janne Ahlberg believes that hackers may have leveraged a SQL injection attack to hack the site.

Down The Rabbit Hole We Go! 300+ Mind Expanding Documentaries

$
0
0

Comments:"Down The Rabbit Hole We Go! 300+ Mind Expanding Documentaries"

URL:http://www.diygenius.com/mind-expanding-documentaries/


Down The Rabbit Hole We Go! 300+ Mind Expanding Documentaries

Posted by Kyle Pearce, In Fascinating Videos

I watch a lot of documentaries. I think they are incredible tools for learning and increasing our awareness of important issues. The power of an interesting documentary is that it can open our minds to new possibilities and deepen our understanding of the world.

On this list of mind expanding documentaries you will find different viewpoints, controversial opinions and even contradictory ideas. Critical thinking is recommended. I’m not a big fan of conspiracy documentaries but I do like films that challenge consensus reality and provoke us to question the everyday ideas, opinions and practices we usually take for granted.

Watching documentaries is one of my favorite methods of self-education. If I find a documentary really inspiring, I usually spend more time researching the different ideas and people interviewed that grab my attention in the film.

I hope you find these documentaries as enlightening as I did!

[1] Life In The Biosphere

Explore the wonder and interconnectedness of the biosphere through the magic of technology.

1. Home
2. How Many People Can Live on Planet Earth
3. The Magical Forest
4. Ants: Nature’s Secret Power
5. Mt. Everest: How It Was Made
6. Mariana’s Trench: The Deepest Spot On Earth
7. Natural World: The Andes
8. Shining Mountains: The Rockies
9. Grand Canyon: How It Was Made
10. The Kingdom of Plants

[2] Creativity and Design:

Learn about all the amazing things that people create with their imaginations.

1. Everything Is A Remix
2. The Creative Brain: How Insight Works
3. Blow Your Mind
4. Design: The New Business
5. PressPausePlay: Art and Creativity in the Digital Age
6. Infamy: A Graffiti Documentary
7. Influencers: How Trends and Creativity Become Contagious
8. RIP: A Remix Manifesto
9. Design: e² – Sustainable Architecture
10. The Genius Of Design

[3] The Education Industrial Complex:

The modern school where young minds are moulded into standardized citizens by the state.

1.  The College Conspiracy
2. Declining by Degrees: Higher Education at Risk
3. The Forbidden Education
4. Default: The Student Loan Documentary
5. College Inc.
6. Education For A Sustainable Future
7. Networked Society: The Future of Learning
8. The Ultimate History Lesson With John Taylor Gatto
9. Waiting For Superman
10. The War On Kids

[4] The Digital Revolution:

The Internet is now the driving force behind change and innovation in the world.

1. Download: The True Story of the Internet
2. The Age of Big Data
3. Resonance: Beings of Frequency
5. Life In A Day
6. Networked Society: On The Brink
7. Us Now: Social Media and Mass Collaboration
8. WikiRebels: The WikiLeaks Story
9. The Virtual Revolution: The Cost of Free
10. How Hackers Changed the World

[5] A New Civilization:

We are at the dawn of a new golden age of human inventiveness and social justice.

1. THRIVE: What On Earth Will It Take?
2. Zeitgeist III: Moving Forward
3. Paradise or Oblivion
4. 2012: Time For Change
5. The Crisis of Civilization
6. The Collective Evolution II
7. The Quickening: Awakening As One
8. 2012 Crossing Over: A New Beginning
9. Collapse
10. The Awakening

[6] Politics:

Explore the politics of power and control and how it affects your life and community.

1. Owned and Operated
2. UnGrip
3. The Power Principle
4. The True Story of Che Guevara
5. Earth Days
6. Capitalism Is The Crisis
7. WikiLeaks: The Secret Life of a Superpower
8. The Putin System
9. The War On Democracy
10. Rise Like Lions: Occupy Wall Street and the Seeds of Revolution

[7] Biographies of Genius:

The biographies of modern geniuses who pushed humanity forward.

1. Isaac Newton: The Last Magician
2. Nikola Tesla: The Greatest Mind of All Time
3. The Unlimited Energy of Nicola Tesla
4. The Missing Secrets Of Nikola Tesla
5. Richard Feynman: No Ordinary Genius
6. How Albert Einstein’s Brain Worked
7. The Extraordinary Genius of Albert Einstein
8. The Biography of Albert Einstein
9. Da Vinci: Unlocking The Genius
10. Leonardo Da Vinci: The Man Who Wanted to Know Everything

[8] War:

War is history’s oldest racket for stealing from the powerless and redistributing resources to the powerful.

1. Psywar: The Real Battlefield Is Your Mind
2. The History of World War II
3. The Secret History of 9/11
4. Robot Armies in the Future
5. The Never Ending War in Afghanistan
6. Shadow Company: Mercenaries In The Modern World
7. World War II From Space
8. Why We Fight
9. The Fog Of War
10. The Oil Factor: Behind The War On Terror

[9] Economics:

Learn about how the financial system works and how people and societies are enslaved through debt.

1. The Corporation: The Pathological Pursuit of Profit and Power
2. Overdose: The Next Financial Crisis
3. The Ascent of Money: A Financial History of The World
4. The One Percent
5. Quants: The Alchemists of Wall Street
6. The Last Days Of Lehman Brothers
7. The Four Horsemen
8. Inside Job: The Biggest Robbery In Human History
9. Capitalism A Love Story
10. Money and Life

[10] Digital Entrepreneurship:

Profiles of the entrepreneurs who have led the digital revolution.

1. The Life Of A Young Entrepreneur
2. Profile: Google’s Larry Page and Sergey Brin
3. Profile: Facebook’s Mark Zuckerberg
4. Starting-Up in America
5. The Biography of Bill Gates
6. Inside Google: The Billion Dollar Machine
7. Steve Jobs: One Last Thing
8 . Steve Jobs: The Billion Dollar Hippy
9. Elon Musk: Risk Takers
10. The Story of Twitter

[11] Sports:

Watch the inspiring stories of amazing athletes and people who live for the adrenalin rush of extreme sports.

1. Fearless: The Jeb Corliss Story
2. Carts of Darkness
3. The Two Escobars
4. Usain Bolt: The World’s Fastest Man
5. Wayne Gretzky: The Life and Times
6. When We Were Kings
7. Mike Tyson: Beyond the Glory
8. Birdmen
9. The Legacy Of Michael Jordan
10. We Ride: The Story of Snowboarding

[12] Technology:

Find out more about the impact of exponential growth and the approaching singularity.

1. Ray Kurzweil: The Transcendent Man
2. How Robots Will Change the World
3. Human 2.0
4. Tomorrow’s World: Life In The Future
5. Trance-Formation: The Future of Humanity
6. The Venus Project: Future By Design
7. Bionics, Transhumanism And The End Of Evolution
8. The Singularity Is Near
9. The Technology Of The Future
10. Powering The Future: The Energy Revolution

[13] Origins of Religion:

Explore the original religious experience of mankind at the dawn of civilization.

1. Entheogen: Awakening the Divine Within
2. Manifesting the Mind: Footprints of the Shaman
3. Ancient Egypt and The Alternative Story of Mankind’s Origins
4. The Hidden Knowledge of the Supernatural
5. Re-Awaken: Open Your Heart, Expand Your Mind
6. Astromythology: The Astronomical Origins of Religion
7. The Root of All Evil: The God Delusion
8. Ancient Knowledge
9. The Naked Truth
10. Before Babel: In Search of the First Language

[14] Western Religion:

The fascinating history of the three Abrahamic religions: Judaism, Christianity and Islam.

1. Secret Quest: The Path of the Christian Gnostics
2. The Secret Gate of Eden
3. Forbidden Knowledge: Lost Secrets of the Bible
4. Banned From The Bible: Secrets Of The Apostles
5.  The Life of Prophet Muhammad
6. The Road To Armageddon
7. The Most Hated Family In America
8. Muhammad: The Legacy of a Prophet
9. A Complete History of God
10. Gnosis: The Untold History of the Bible

[15] Eastern Religion:

Expand your mind by also studying the entirely different religious worldviews of the East.

1. Inner Worlds, Outer Worlds
2. The Life Of The Buddha
3. The Seven Wonders of the Buddhist World
4. Mysteries of the Cosmic OM: Ancient Vedic Science
5. Where Science and Buddhism Meet
6. The Yogis of Tibet
7. Taj Mahal: Secrets To Blow Your Mind
8. Light at the Edge of the World: Tibetan Science of the Mind
9. Myths of Mankind: The Mahabharata
10. Ayurveda: The Art of Being

[16] Consciousness:

Learn about the basic unity of existence and the miracle of consciousness that makes us self-aware.

1. Athene’s Theory of Everything
2. Theory of Everything: GOD, Devils, Dimensions, Dragons & The Illusion of Reality
3. The God Within: Physics, Cosmology and Consciousness
4. 5 Gateways: The Five Key Expansions of Consciousness
5. Return to the Source: Philosophy and The Matrix
6. The Holographic Universe
7. DMT: The Spirit Molecule
8. What Is Consciousness?
9. Kymatica
10. Neuroplasticity: The Brain That Changes Itself

[17] Mysteries:

Indiana Jones-style explorations into the unsolved mysteries of the past.

1. Alchemy: Sacred Secrets Revealed
2. The Day Before Disclosure
3. The Pyramid Code
4. The Secret Design of the Egyptian Pyramids
5. Decoding the Past: Secrets of the Dollar Bill
6. The Lost Gods of Easter Island
7. Origins of the Da Vinci Code
8. Forbidden Knowledge: Ancient Medical Secrets
9. Secret Mysteries of America’s Beginnings: The New Atlantis
10. Secrets in Plain Sight

[18] Mass Culture:

Learn about how our thoughts and opinions are influenced by mass culture.

1. The Century of the Self
2. All Watched Over By Machines Of Loving Grace
3. The Power Of Nightmares
4. The Trap: What Happened To Our Dreams of Freedom
5. Starsuckers: A Culture Obsessed By Celebrity
6. Human Resources: Social Engineering in the 20th Century
7. Obey: The Death of the Liberal Class
8. Motivational Guru: The Story of Tony Robbins
9. Bob Marley: Freedom Road
10. Radiant City

[19] Corporate Media:

Discover how the mass media and advertisers channel our irrational impulses.

1. Weapons of Mass Deceptions
2. Secrets of the Superbrands
3. Orwell Rolls in his Grave
4. The Esoteric Agenda
5. Propaganda
6. The Myth of the Liberal Media: The Propaganda Model of News
7. Manufacturing Consent: Noam Chomsky and the Media
8. Symbolism in Logos: Subliminal Messages or Ancient Archetypes
9. Edward Snowden: A Truth Unveiled
10. Outfoxed: Rupert Murdoch’s War on Journalism

[20] Art and Literature:

Explore the lives of famous artists and the world-changing ideas that sprang from their minds.

1. Lord Of The Rings: Facts Behind The Fiction
2. Cosm: Alex Gray’s Visionary Art
3. Banksy’s Exit Through The Gift Shop
4. New Art and the Young Artists Behind It
5. Salvador Dali: A Master of the Modern Era
6. How Art Made The World: More Human Than Human
7. The Day Pictures Were Born
8. Guns, Germs and Steel
9. Off-Book: Digital Age Creativity
10. This Is Modern Art

[21] Health:

Explore human health, how the body works and the incredible power of our brains.

1. Science And The Human Body
2. The Truth About Vitamins
3. How To Live To 101
4. America’s Obesity Epidemic
5. The War On Health
6. The Beautiful Truth
7. Food Inc.
8. The Truth About Food
9. Addicted To Pleasure: Sugar
10. The Living Matrix

[22] Drugs:

Documentaries on the effect of drugs, legal and illegal, on the body and mind.

1. The Union: The Business Behind Getting High
2. The Drugging Of Our Children
3. How Marijuana Affects Your Health
4. Making a Killing: The Untold Story of Psychotropic Drugging
5. Clearing the Smoke: The Science of Cannabis
6. LSD: The Beyond Within
7. The War on Drugs: The Prison Industrial Complex
8. Are Illegal Drugs More Dangerous Than Legal Drugs?
9. The Prescription Drug Abuse Epidemic
10. Run From The Cure: The Rick Simpson Story

[23] Environment:

Thought-provoking documentaries on the environmental movement and the growing threats to our biosphere.

1. Earthlings
2. Blue Gold: World Water Wars
3. Tapped
4. Shift: Beyond the Numbers of the Climate Crisis
5. All Things Are Connected
6. The Fight For Amazonia
7. Flow: For Love Of Water
8. Here Comes the Sun
9. The World According To Monsanto
10. The Story of Stuff

[24] Cosmos:

Expand your mind by exploring our indescribably large and beautiful Cosmos.

1. The Search for Planet Similar to Earth
2. Inside the Milky Way Galaxy
3. Cosmic Journeys : The Largest Black Holes in the Universe
4. Beyond The Big Bang
5. The Mystery of the Milky Way
6. Fractals: The Hidden Dimension
7. Into The Universe With Stephen Hawking: The Story of Everything
8. Pioneer Science: Discovering Deep Space
9. Carl Sagan’s Cosmos
10. The Strangest Things In The Universe

[25] Science:

The history of scientific discovery and how scientific instruments expand our perception.

1. The Complete History of Science
2. The Quantum Revolution
3. Secret Universe: The Hidden Life of the Cell
4. Stephen Hawking: A Brief History of Time
5. Quantum Mechanics: Fabric of the Cosmos
6. The Light Fantastic
7. DNA: The Secret of Life
8. Parallel Universes, Alternative Timelines & Multiverse
9. What Is The Higgs Boson?
10. Infinity

[26] Evolution:

The story of our evolution and the emergence of self-aware human beings.

1. The Origin of Life
2. Homo Sapiens: The Birth of Humanity
3. Beyond Me
4. The Global Brain
5. Metanoia: A New Vision of Nature
6. Birth Of A New Humanity
7. Samsara
8. Baraka
9. The Incredible Human Journey
10. The Human Family Tree

[27] Psychology and The Brain:

New research is shining light on how our brains function and the ways we can change our minds.

1. How Smart Can We Get?
2. The Science of Lust
3. The Secret You
4. What Are Dreams?
5. A Virus Called Fear
6. Beyond Thought (Awareness Itself)
7. The Human Brain
8. Superconscious Mind: How To Double Your Brain’s Performance
9. How Does Your Memory Work?
10. Secrets of the Mind

[28] Modern History:

The story of the enlightenment, the industrial revolution and the rise of the modern world.

1. The Entrepreneurs Who Built America
2. History of the World in Two Hours
3. The Industrial Revolution
4. The Rise and Fall of the Third Reich
5. The Adventure of the English Language
6. The French Revolution
7. Big Sugar
8. The Spanish Inquisition
9. The American Revolution
10. The Mexican American War

[29] Pre-Modern History:

The story of the new world and European history in the middle ages.

1. America Before Columbus
2. The Dark Ages
3. Socrates, Aristotle and Plato
4. The Medici: The Most Influencial Family In The World
5. Rome: The Rise And Fall Of An Empire
6. History of Britain: The Myth of the Anglo-Saxon Invasion
7.  A History of Celtic Britain
8. The Crusades: Victory and Defeat
9. The Vikings: Voyage To America
10. Copernicus and the Scientific Revolution

[30] Current Events:

Become more informed about current events that are shaping the world.

1. Syria: The Reckoning
2. Empire: Putin’s Russia
3. The New Arms Race
4. The Killing of Yasser Arafat
5. Egypt In Crisis
6. Inside Obama’s Presidency
7. The Untouchables: How Obama Protected Wall Street
8. Behind The Rhetoric: The Real Iran
9. A History of the Middle East since WWII
10. Climate Wars

[31] Ancient Civilizations:

Fascinating explorations into the ancient civilizations of our past.

1. When God Was a Girl: When Goddesses Ruled The Heavens and Earth
2. The Persian Empire : Most Mysterious Civilization in the Ancient World
3. What The Ancients Did For Us
4. What the Ancients Knew
5. Egypt: Beyond the Pyramids
6. Secrets of the Ancient Empires
7. Constellations & Ancient Civilizations
8. Graham Hancock’s Quest For The Lost Civilization
9. Atlantis: The Lost Continent
10. Seven Wonders of the Ancient World

I hope you found this list of mind expanding documentaries useful. If you have your own personal favourite, I’d love to hear your top 5 documentaries in the comments.

Join our DIY Education newsletter and get instant access to the HyperLearning Toolkit. It features 100+ resources and tools for self-education and learning today’s most in-demand digital skills.

Article 10

$
0
0

Comments:""

URL:http://ethereum.org/ethereum.html


In the last few months, there has been a great amount of interest into the area of using Bitcoin-like blockchains, the mechanism that allows for the entire world to agree on the state of a public ownership database, for more than just money. Commonly cited applications include "colored coins", the idea of using on-blockchain digital assets to represent custom currencies and financial instruments, "smart property", physical objects such as cars which track a colored coin on a blockchain to determine their present legitimate owner, as well as more advanced applications such as decentralized exchange, financial derivatives, peer-to-peer gambling and on-blockchain identity and reputation systems. Perhaps the most ambitious of all is the concept of "autonomous agents" or "decentralized autonomous corporations" - autonomous entities that operate on the blockchain without any central control whatsoever, eschewing all dependence on legal contracts and organizational bylaws in favor of having resources and funds autonomously managed by a self-enforcing smart contract on a cryptographic blockchain.

However, most of these applications are difficult to implement today, simply because the scripting systems of Bitcoin, and even proto-cryptocurrency-2.0 alternatives such as the Bitcoin-based colored coins protocol and so-called "metacoins", are far too limited to allow the kind of arbitrarily complex computation that DACs require. What this project intends to do is take the innovations that such protocols bring, and generalize them - create a fully-fledged, Turing-complete (but heavily fee-regulated) cryptographic ledger that allows participants to encode arbitrarily complex contracts, autonomous agents and relationships that will be mediated entirely by the blockchain. Rather than being limited to a specific set of transaction types, users will be able to use Ethereum as a sort of "Minecraft of crypto-finance" - that is to say, one will be able to implement any feature that one desires simply by coding it in the protocol's internal scripting language. Custom currencies, financial derivatives, identity systems and decentralized organizations will all be easy to do, and it will also be possible to construct transaction types that even the Ethereum developers did not imagine. Altogether, we believe that this design is a solid step toward the realization of "cryptocurrency 2.0"; we hope that Ethereum will be to cryptocurrency what Web 2.0 was to the World Wide Web circa 1995.

Why A New Protocol

When one wants to create a new application, especially one in so delicate an area as cryptography or cryptocurrency, the immediate, and correct, first instinct is to use existing protocols as much as possible. There is no need to create a new currency, or even a new protocol, when the problem can be solved entirely by using existing technologies. Indeed, the puzzle of attempting to solve the problems of smart property, smart contracts and decentralized autonomous corporations on top of Bitcoin is how our interest in cryptocurrency 2.0 technologies originally started. Over the course of our research, however, it became evident that while the Bitcoin protocol is more than adequate for currency, basic multisignature escrow and certain simple versions of smart contracts, there are fundamental limitations that make it non-viable for anything beyond a certain very limited scope of features.

Colored Coins

The first attempt to implement a system for managing smart property and custom currencies and assets on top of a blockchain was built as a sort of overlay protocol on top of Bitcoin itself, with many advocates making a comparison to the way that HTTP serves as a layer on top of TCP in the internet protocol stack. The colored coins protocol is roughly defined as follows:

  • A colored coin issuer determines that a given transaction output H:i (H being the transaction hash and i the output index) represents a certain asset, and publishes a "color definition" specifying this transaction output alongside what it represents (eg. 1 satoshi from H:i = 1 ounce of gold redeemable at amagimetals.com).
  • Others "install" the color definition file in their colored coin clients.
  • When the color is first released, output H:i is the only transaction output to have that color.
  • If a transaction spends inputs with color X, then its outputs will also have color X. For example, if the owner of H:i immediately makes a transaction to split that output among five addresses, then those transaction outputs will all also have color X.
  • If a transaction has inputs of different colors, then a "color transfer rule" or "color kernel" determines which colors which outputs have (eg. a very naive implementation may say that output 0 has the same color as input 0, output 1 the same color as input 1, etc).
  • When a colored coin client notices that it has received a new transaction output, it uses a back-tracing algorithm based on the color kernel to determine the color of the output (a minimal sketch follows this list). Because the rule is deterministic, all clients will agree on what color (or colors) each output has.
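
To make the back-tracing rule concrete, here is a minimal sketch in Python of the naive color kernel described above (output n inherits the color of input n). The get_transaction helper, the GENESIS constant and the transaction field names are assumptions made purely for illustration.

# Minimal sketch of the naive color kernel described above: output n inherits the color
# of input n, traced back until the issuing output is reached. get_transaction(), GENESIS
# and the tx.inputs layout are hypothetical and exist only for illustration.
GENESIS = ("H", 0)   # the issuing output H:i named in the color definition

def color_of(txhash, index, get_transaction):
    if (txhash, index) == GENESIS:
        return "X"                               # the issued color
    tx = get_transaction(txhash)
    if index >= len(tx.inputs):
        return None                              # no matching input under the naive kernel: uncolored
    prev_hash, prev_index = tx.inputs[index]     # input n funds output n in this kernel
    return color_of(prev_hash, prev_index, get_transaction)   # keep tracing backwards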

However, the protocol has several fundamental flaws:

  • Difficulty of simplified payment verification - Bitcoin's Merkle tree construction allows for a protocol known as "simplified payment verification", where a client that does not download the full blockchain can quickly determine the validity of a transaction output by asking other nodes to provide a series of hashes starting from the transaction hash, going up the Merkle tree and culminating at the root hash in the block header (a generic sketch of such a Merkle branch check follows this list). The client will still need to download the block headers to be secure, but the amount of data bandwidth and verification time required drops by a factor of nearly a thousand. With colored coins, this is much harder. The reason is that one cannot determine the color of a transaction output simply by looking up the Merkle tree; rather, one needs to employ the backward scanning algorithm, fetching potentially hundreds of transactions and requesting a Merkle tree validity proof of each one, before a client can be fully satisfied that a transaction has a certain color. After over a year of investigation, including help from ourselves, no solution has been found to this problem.
  • Incompatibility with scripting - as mentioned above, Bitcoin does have a moderately flexible scripting system, for example allowing users to sign transactions of the form "I release this transaction output to anyone who is willing to pay to me 1 BTC". Other examples include assurance contracts, efficient micropayments and on-blockchain auctions. However, this system is inherently not color-aware; that is to say, one cannot make a transaction of the form "I release this transaction output to anyone who is willing to pay me one gold coin defined by the genesis H:i", because the scripting language has no idea that there even are different colors. One major consequence of this is that, while trust-free swapping of two different colored coins is possible, a full decentralized exchange is not, since there is no way to place an order to buy or sell that is enforceable.
  • Same limitations as Bitcoin - theoretically, on-blockchain protocols can support advanced derivatives, bets and various kinds of conditional transfers that will be described in more detail later in this paper. Colored coins inherits the limitations of Bitcoin in terms of the impossibility of many such arrangements.
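
For reference, the following is a generic sketch in Python of the kind of Merkle branch check that simplified payment verification relies on; it uses double-SHA256 as Bitcoin does, but omits Bitcoin's exact serialization details, so treat it as an illustration rather than a drop-in implementation.

import hashlib

def dsha256(data):
    # double SHA-256, as used for Bitcoin's Merkle tree nodes
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_branch(leaf_hash, branch, merkle_root):
    # branch is a list of (sibling_hash, sibling_is_left) pairs from the leaf up to the root
    h = leaf_hash
    for sibling, sibling_is_left in branch:
        h = dsha256(sibling + h) if sibling_is_left else dsha256(h + sibling)
    return h == merkle_root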

Metacoins

Another concept, once again in the spirit of sitting on top of Bitcoin much like HTTP over TCP, is that of "metacoins". The concept of a metacoin is simple: the metacoin protocol provides for a way of encoding metacoin transaction data into the outputs of a Bitcoin transaction, and a metacoin node works by processing all Bitcoin transactions and evaluating Bitcoin transactions that are valid metacoin transactions in order to determine the current balance sheet at any given time. For example, a simple metacoin protocol might require a transaction to have four outputs: MARKER, FROM, TO and VALUE. MARKER would be a specific marker address to identify a transaction as a metacoin transaction, FROM would be the address that coins are sent from, TO would be the address that coins are sent to and VALUE would be an address encoding the amount sent. Because the Bitcoin protocol is not metacoin-aware, and thus will not reject invalid metacoin transactions, the metacoin protocol must treat all transactions with the first output going to MARKER as valid and react accordingly. For example, an implementation of the relevant part of this metacoin protocol might look like this:

# Runs once for every Bitcoin transaction, in order, inside the metacoin node's scan
if tx.output[0] != MARKER:                                   # not a metacoin transaction; skip it
 continue
elif balance[tx.output[1]] < decode_value(tx.output[3]):     # the FROM address lacks the funds; invalid
 continue
elif not tx.hasSignature(tx.output[1]):                      # not signed by the FROM address; invalid
 continue
else:
 balance[tx.output[1]] -= decode_value(tx.output[3])         # debit FROM
 balance[tx.output[2]] += decode_value(tx.output[3])         # credit TO

The advantage of a metacoin protocol is that it can allow for more advanced transaction types, including custom currencies, decentralized exchange, derivatives, etc, that are impossible on top of Bitcoin itself. However, metacoins on top of Bitcoin have one major flaw: simplified payment verification, already difficult with colored coins, is outright impossible on a metacoin. The reason is that while one can use SPV to determine that there is a transaction sending 30 metacoins to address X, that by itself does not mean that address X has 30 metacoins; what if the sender of the transaction did not have 30 metacoins to start with and so the transaction is invalid? Finding out any part of the current state essentially requires scanning through all transactions going back to the metacoin's original launch to figure out which transactions are valid and which ones are not. This makes it impossible to have a truly secure client without downloading the entire 12 GB Bitcoin blockchain.
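
In other words, the only way for a metacoin node to learn any balance is a full replay from the launch block; a rough sketch of that computation, with apply_metacoin_tx standing in for the MARKER/FROM/TO/VALUE rule shown above, looks like this:

def metacoin_balances(all_bitcoin_transactions, apply_metacoin_tx):
    # Replays every Bitcoin transaction since the metacoin's launch; there is no shortcut,
    # which is exactly why SPV-style verification is impossible for metacoins.
    balance = {}                          # address -> metacoin balance
    for tx in all_bitcoin_transactions:   # in blockchain order, back to the launch block
        apply_metacoin_tx(tx, balance)    # invalid metacoin transactions simply change nothing
    return balance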

Ethereum solves both of these issues by being hosted on its own blockchain, and by storing a distinct "state tree" representing the current balance of each address and a "transaction list" representing the transactions between the current block and the previous block in each block. Ethereum contracts are allowed to have an arbitrarily large memory, and the Turing-complete scripting language makes it possible to encode an entire currency inside of a single contract. The intention of Ethereum is not to replace colored coins and metacoins by offering superior features; rather, Ethereum intends to serve as a superior foundational layer offering a uniquely powerful scripting system on top of which arbitrarily advanced contracts, currencies and other decentralized applications can be built. If existing colored coins and metacoin projects were to move onto Ethereum, they would gain the benefits of Ethereum's simplified payment verification, compatibility with financial derivatives and decentralized exchange, and the ability to work together on a single network. With Ethereum, someone with an idea for a new contract or transaction type that might drastically improve the state of what can be done with cryptocurrency would not need to start their own coin; they could simply implement their idea in Ethereum script code.

Philosophy

The design behind Ethereum is intended to follow the following principles:

  • Simplicity - the Ethereum protocol should be as simple as possible, even at the cost of some data storage inefficiency or time inefficiency. An average programmer should ideally be able to follow and implement the entire specification, so as to eventually help minimize the influence that any specific individual or elite group can have on the protocol and to further the vision of Ethereum as a protocol that is open to all. Optimizations which add complexity should not be included unless they provide very substantial benefit.
  • Universality - it is a fundamental part of Ethereum's design philosophy that Ethereum does not have "features". Instead, Ethereum provides an internal Turing-complete scripting language which you can use to construct any smart contract or transaction type that can be mathematically defined. Want to invent your own financial derivative? With Ethereum, you can. Want to make your own currency? Set it up as an Ethereum contract. Want to set up a full-scale Daemon or Skynet? You may need to have a few thousand interlocking contracts, and be sure to feed them generously, to do that, but nothing is stopping you.
  • Modularity - different parts of the Ethereum protocol should be designed to be as modular and separable as possible. Over the course of development, it should be easy to make a small protocol modification in one place and have the application stack continue to function without any further modification. Innovations such as Dagger, Patricia trees and RLP should be implemented as separate libraries and made to be feature-complete even if Ethereum does not require certain features, so as to make them usable in other protocols as well. Ethereum development should be done so as to maximally benefit the entire cryptocurrency ecosystem, not just Ethereum itself.
  • Non-discrimination - the protocol should not attempt to actively restrict or prevent specific categories of usage, and all regulatory mechanisms in the protocol should be designed to directly regulate the harm, not attempt to oppose specific undesirable applications. You can even run an infinite loop script on top of Ethereum for as long as you are willing to keep paying the per-computational-step transaction fee for it.

Basic Building Blocks

At its core, Ethereum starts off as a fairly regular proof-of-work mined cryptocurrency without many extra complications; in fact, Ethereum is in many ways simpler than the Bitcoin-based cryptocurrencies that we use today. The concept of a transaction having multiple inputs and outputs, for example, is gone, replaced by a more intuitive balance-based model (to prevent transaction replay attacks, as part of each account balance we also store an incrementing nonce). Sequence numbers and lock times are also removed, and all transaction and block data is encoded in a single format. Instead of addresses being the RIPEMD160 hash of the SHA256 hash of the public key prefixed with 04, addresses are simply the last 20 bytes of the SHA3 hash of the public key. Unlike other cryptocurrencies, which aim to have "features", Ethereum intends to take features away, and instead provide its users with near-infinite power through an all-encompassing mechanism called "contracts".
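
As a rough illustration of the address rule just described, the sketch below uses Python's hashlib.sha3_256 as a stand-in for the SHA3 function named above; the exact hash variant and public key encoding used by real clients may differ.

import hashlib

def ethereum_address(public_key_bytes):
    # An address is the last 20 bytes of the SHA3 hash of the public key.
    digest = hashlib.sha3_256(public_key_bytes).digest()   # 32-byte hash
    return digest[-20:]                                    # keep only the final 20 bytes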

Ethereum Client P2P Protocol

The Ethereum client P2P protocol is a fairly standard cryptocurrency protocol, and can just as easily be used for any other cryptocurrency; the only modification is the introduction of the "Greedy Heaviest Observed Subtree" (GHOST) protocol first introduced by Yonatan Sompolinsky and Aviv Zohar in December 2013; the motivation for and implementation of GHOST will be described in more detail below. The Ethereum client will be entirely reactive; it will not do anything by itself (except for the networking daemon maintaining connections) unless provoked. However, the client will also be more powerful; unlike bitcoind, which only stores a limited amount of data about the blockchain, the Ethereum client will also act as a fully functional backend for a block explorer.

When the client reads a message, it will perform the following steps:

1. Hash the data, and check if the data with that hash has already been received. If so, exit. Otherwise, pass it along to the data parser.
2. If the data parser says that a data item is a valid block, go to step 3. If the data parser says that a data item is a transaction, see if there are enough funds in the sending address for the transaction to go through, and if there are add it to the local transaction list and publish it to the network. If the data parser says that a data item is a message, send the message to the message responder and return the response.
3. Check if the "parent" parameter in the block is already stored in the database. If it is not, exit.
4. Check if every block header in the "uncles" parameter in the block has the block's parent as its own parent. If any is not, exit. Note that uncle blocks do not need to be in the database; they just need to have valid proof of work.
5. Call the state updater with arguments (1) the parent of the block, (2) the transaction list of the block, (3) the timestamp of the block and (4) the coinbase of the block and see if the block header outputted by the state updater is exactly the same. If not, exit. If yes, add the block to the database and publish it to the network.
6. Determine TD(block) ("total difficulty") for the new block. TD is defined recursively by TD(genesis_block) = 0 and TD(B) = TD(B.parent) + sum(u.difficulty for u in B.uncles) + B.difficulty (see the sketch after this list).
7. If the new block has higher TD than the current block, set the current block to the new block.
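
The total-difficulty rule in step 6 translates directly into a small recursive function; this is only a sketch, assuming block objects that carry parent, uncles and difficulty fields as in the steps above.

def total_difficulty(block):
    # TD(genesis_block) = 0; TD(B) = TD(B.parent) + sum(u.difficulty for u in B.uncles) + B.difficulty
    if block.parent is None:              # genesis block
        return 0
    return (total_difficulty(block.parent)
            + sum(u.difficulty for u in block.uncles)
            + block.difficulty)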

If the node is mining, the node performs the following additional steps upon receiving a block:

1. Determine if the block's parent is in the database. If not, discard it.
2. Determine TD(block) ("total difficulty"). If TD(block) is higher than that of any existing block in the database, start mining on that block and clear all uncles. Also, remove all transactions in the new block from the transaction pool if present, and immediately apply the remaining transactions to the new block.
3. If the block's parent is the parent of the highest block, add the block header to the set of uncles and restart the proof of work.

Upon receiving a transaction, the miner should apply the transaction to the current block and then start mining the new block header after the current mining round finishes.

The motivation behind GHOST is that blockchains with fast confirmation times currently suffer from reduced security due to a high stale rate; because blocks take a certain time to propagate through the network, call it t, if miner A mines a block and then miner B happens to mine another block before miner A's block propagates to B, miner B's block will end up wasted and will not contribute to network security. Furthermore, there is a centralization issue: if miner A is a mining pool with 30% hashpower and B has 10% hashpower, A will have a risk of producing stale blocks 70% of the time whereas B will have a risk of producing stale blocks 90% of the time. Thus, if the stale rate is high, A will be substantially more efficient simply by virtue of its size. With these two effects combined, blockchains which produce blocks quickly are very likely to lead to one mining pool having enough percentage of the network to have de facto control over the mining process. GHOST solves the first issue of network security loss by including stale blocks in the calculation of which chain is the "longest". Ethereum only does GHOST down one level for simplicity (ie. stale blocks must be included as uncles in the next block in order to count), but this should have 90%+ of the benefit of GHOST. Also, in Ethereum we give stales 3/4 of their block reward (and the nephew that includes them 1/8 as a reward); this modification is intended to solve the second issue of centralization bias.

Currency and Issuance

The Ethereum network includes its own built-in currency, ether. The main reason for including a currency in the network is twofold. First, like Bitcoin, ether is rewarded to miners so as to incentivize network security. Second, it serves as a mechanism for paying transaction fees for anti-spam purposes. Of the two main alternatives to fees, per-transaction proof of work similar to Hashcash and feeless laissez-faire, the former is wasteful of resources and unfairly punitive against weak computers and smartphones and the latter would lead to the network being almost immediately overwhelmed by an infinitely looping "logic bomb" contract. Ether will have a theoretical hard cap of 2^128 units (compare 2^50.9 in BTC), although not more than 2^100 units will be released in the foreseeable future. For convenience and to avoid future argument (see the current mBTC/uBTC/satoshi debate), the denominations will be pre-labelled:

  • 1: wei
  • 10^3: __
  • 10^6: __
  • 10^9: koblitz
  • 10^12: szabo
  • 10^15: finney
  • 10^18: ether

This should be taken as an expanded version of the concept of "dollars" and "cents" or "BTC" and "satoshi" that is intended to be future proof; only szabo, finney and ether will likely be used in the foreseeable future. The right to name the 10^3 and 10^6 units will be left as a high-level secondary reward for the fundraiser subject to pre-approval from ourselves.

                After 1 year    After 5 years
Currency units  2X              4X
Fundraiser      50%             25%
Founders        12.5%           6.25%
Reserve         12.5%           6.25%
Miners          25%             62.5%

The issuance model will be as follows:

  • Ether will be sold in a Mastercoin-style fundraiser at the price of 1 ether for 0.0001 BTC. Suppose that X ether gets collected in this way.
  • 0.25X ether will be given to the founders.
  • 0.25X ether will be given to the Ethereum organization as a reserve pool to pay expenses in ETH such as ETH salaries or bounties for those developers who want part or all of their compensation to be in this form
  • 0.5X ether will be mined per year forever after that point (ie. permanent linear inflation)

For example, after five years and assuming no transactions, 25% of the ether will be in the hands of the fundraiser participants, 6.25% in the founder pool, 6.25% paid to the reserve pool, and 62.5% will belong to miners. The linear model reduces the risk of what some see as excessive wealth concentration in Bitcoin, and gives individuals living in present and future eras a fair chance to acquire currency units, while at the same time retaining a strong incentive to invest because the inflation "rate" still tends to zero over time. We also theorize that because coins are always lost over time, and coin loss can be modeled as a percentage of the total supply per year, the total currency supply in circulation will in fact eventually stabilize at a value equal to the annual issuance divided by the loss rate (eg. 200x annual issuance at a loss rate of 0.5%).
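
That equilibrium claim is easy to check numerically; the sketch below simulates permanent linear issuance with a constant fractional loss rate, using the example figures from the paragraph above.

def simulate_supply(annual_issuance=1.0, loss_rate=0.005, years=5000):
    # Each year some fraction of the existing supply is lost and a fixed amount is newly issued;
    # the supply converges to annual_issuance / loss_rate (200x the issuance at a 0.5% loss rate).
    supply = 0.0
    for _ in range(years):
        supply = supply * (1 - loss_rate) + annual_issuance
    return supply

print(simulate_supply())   # approaches 1.0 / 0.005 = 200.0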

Data Format

All data in Ethereum will be stored in recursive length prefix encoding, which serializes arrays of strings of arbitrary length and dimension into strings. For example, ['dog', 'cat'] is serialized (in byte array format) as [ 130, 67, 100, 111, 103, 67, 99, 97, 116]; the general idea is to encode the data type and length in a single byte followed by the actual data (eg. converted into a byte array, 'dog' becomes [ 100, 111, 103 ], so its serialization is [ 67, 100, 111, 103 ]). Note that RLP encoding is, as suggested by the name, recursive; when RLP encoding an array, one is really encoding a string which is the concatenation of the RLP encodings of each of the elements. In the event that storing a number is required, the number will be stored in big-endian base 256 format (eg. 32767 in byte array format as [ 127, 255 ]).
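
The big-endian base 256 rule for numbers is the most precisely pinned-down part of this description; here is a minimal sketch of it in Python (the string and list prefixes are left out because their exact byte values may still change).

def encode_number(n):
    # Big-endian base 256: eg. 32767 -> [127, 255]
    out = []
    while n > 0:
        out.insert(0, n % 256)   # most significant byte ends up first
        n //= 256
    return out

def decode_number(byte_list):
    n = 0
    for b in byte_list:
        n = n * 256 + b
    return n

assert encode_number(32767) == [127, 255]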

A full block is stored as:

[ block_header, transaction_list , uncle_list ]

The block header is:

[ parent hash, sha3(rlp_encode(uncle_list)), coinbase address, state_root, sha3(rlp_encode(transaction_list)), difficulty, timestamp, nonce ]

Where the data for the proof of work is the RLP encoding of the block WITHOUT the nonce. uncle_list and transaction_list are the lists of the uncle block headers and transactions in the block, respectively.

The state_root is the root of a Merkle Patricia tree containing (key, value) pairs for all addresses where each address is represented as a 20-byte binary string. At each address, the value stored in the Merkle Patricia tree is a string which is the RLP-serialized form of an object of one of the following two forms:

[ balance, nonce ]
[ balance, nonce, contract_root ]

The nonce is the number of transactions made from the address, and is incremented every time a transaction is made. The purpose of this is to (1) make each transaction valid only once to prevent replay attacks, and (2) to make it impossible (more precisely, cryptographically infeasible) to construct a contract with the same hash as a pre-existing contract. balance refers to the contract or address's balance, denominated in wei. contract_root is the root of yet another Patricia tree, containing the contract's memory, if that address is controlled by a contract. Note that in the main Patricia tree all addresses have a length of 20, even if they start with one or more zero bytes, and in the contract subtrees all indices have a length of 32, left-padded with zero bytes if necessary.
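
As a small illustration of the two account forms and the fixed key lengths described above, here is a sketch in Python; the helper names are invented for illustration only.

def pad_key(key_bytes, length):
    # Keys are fixed-length: 20 bytes for addresses in the main tree, 32 bytes for
    # indices in a contract's subtree, left-padded with zero bytes if necessary.
    return b"\x00" * (length - len(key_bytes)) + key_bytes

def account_value(balance, nonce, contract_root=None):
    # [balance, nonce] for an ordinary address, [balance, nonce, contract_root] for a contract.
    if contract_root is None:
        return [balance, nonce]
    return [balance, nonce, contract_root]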

Mining algorithm

Over the past five years of experience with Bitcoin and alternative cryptocurrencies, one important property for proof of work functions that has been discovered is that of "memory-hardness" - computing a valid proof of work should require not only a large number of computations, but also a large amount of memory. Currently, two major categories of memory-hard functions, scrypt and Primecoin mining, exist, but both are imperfect; neither requires nearly as much memory as an ideal memory-hard function could, and both suffer from time-memory tradeoff attacks, where the function can be computed with significantly less memory than intended at the cost of sacrificing some computational efficiency. Ethereum instead uses an algorithm called Dagger, a memory-hard proof of work based on moderately connected directed acyclic graphs (DAGs, hence the name), which, while far from optimal, has much stronger memory-hardness properties than anything else in use today: an estimated 50-500 MB of RAM will be required per thread depending on the choice of parameters.

Dagger is more fully specified here: http://wiki.ethereum.org/index.php/Dagger

Transactions

A transaction is stored as:

[ nonce, receiving_address, value, [ data item 0, data item 1 ... data item n ], v, r, s ]

nonce is the number of transactions already sent by that address, or an empty string if this is the first transaction. (v,r,s) is the raw Electrum-style signature, made with the private key corresponding to the sending address, over the transaction with the signature fields omitted, with 27 <= v <= 30. From an Electrum-style signature (65 bytes) it is possible to extract the public key, and thereby the address, directly. Note that every well-formed transaction is accepted into the blockchain; transactions that are invalid or have insufficient balance simply have no effect. Transaction fees will be included automatically. If one wishes to voluntarily pay a higher fee, one is always free to do so by constructing a contract which forwards transactions but automatically sends a certain amount or percentage to the miner of the current block.

Difficulty adjustment

Difficulty is adjusted by the formula:

D(genesis_block) = 2^36
D(block) =
 if block.timestamp >= block.parent.timestamp + 42: D(block.parent) - floor(D(block.parent) / 1024)
 else: D(block.parent) + floor(D(block.parent) / 1024)

This stabilizes around a block time of 60 seconds automatically (note that at a block time of 60 seconds, 50% of blocks come within 42 seconds of the previous; 1 - 1/e = 63.22% come within 60 seconds), and adjusts at a maximum rate of about 0.1% per block (~100% per 12 hours).
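
Written out as code, the adjustment rule is a direct transcription of the formula above; block objects with timestamp and parent fields are assumed for illustration.

def difficulty(block):
    # D(genesis_block) = 2^36; later blocks adjust by D(parent)/1024 up or down
    if block.parent is None:                     # genesis block
        return 2 ** 36
    parent_difficulty = difficulty(block.parent)
    if block.timestamp >= block.parent.timestamp + 42:
        return parent_difficulty - parent_difficulty // 1024   # blocks arriving slowly: ease difficulty
    else:
        return parent_difficulty + parent_difficulty // 1024   # blocks arriving quickly: raise difficulty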

Transactions sent to the empty string as an address are a special type of transaction, creating a "contract".

Here is where we get to the actually interesting part of the Ethereum protocol. In Ethereum, there are actually two types of entities that can generate and receive transactions: actual people (or bots, as cryptographic protocols cannot distinguish between the two) and contracts. A contract is essentially an automated agent that lives on the Ethereum network, has an Ethereum address and balance, and can send and receive transactions. A contract is "activated" every time someone sends a transaction to it, at which point it runs its code, perhaps modifying its internal state or even sending some transactions, and then shuts down. The "code" for a contract is written in a special-purpose low-level language consisting of a stack, which is not persistent, and 2^256 memory entries, which constitute the contract's permanent state. Note that Ethereum users will not need to code in this low-level stack language; we will provide a simple C-like language with variables, expressions, conditionals and while loops, and provide a compiler down to Ethereum code.

Applications

Here are some examples of what can be done with Ethereum contracts, with all code examples written in a language that will likely be very similar to the C-like high-level language that we will release. The variables tx.sender, tx.value, tx.fee, tx.data and tx.datan are properties of the incoming transaction, contract.memory, contract.address and contract.creator of the contract itself, and block.contract_memory, block.addressbalance, block.number, block.difficulty, block.basefee and block.timestamp properties of the block. block.basefee is the "base fee" which all transaction fees in Ethereum are calculated as a multiple of; for more info see the "fees" section below. All variables expressed as capital letters (eg. A) are constants, to be set by the contract creator when actually releasing the contract. The keywords wei, szabo, finney and ether, representing the denominations of ether, simply act as multipliers of 1, 1 million, etc, respectively.

Sub-currencies

Sub-currencies have many applications, ranging from currencies representing assets such as USD or gold to company stocks and even currencies with only one unit issued to represent collectibles or smart property. Advanced special-purpose financial protocols sitting on top of Ethereum may also wish to organize themselves with an internal currency. Sub-currencies are surprisingly easy to implement in Ethereum; this section describes a fairly simple contract for doing so.

The idea is that if someone wants to send X currency units to address A in currency contract C, they will need to make a transaction of the form (C, 100 * block.basefee, [A, X]), and the contract parses the transaction and adjusts balances accordingly. For a transaction to be valid, it must send 100 times the base fee worth of ether to the contract in order to "feed" the contract (as each computational step after the first 16 for any contract costs the contract a small fee and the contract will stop working if its balance drains to zero).

if tx.value < 100 * block.basefee:
    stop
if contract.memory[1000]:
    from = tx.sender
    to = tx.data[0]
    value = tx.data[1]
    if to <= 1000:
        stop
    if contract.memory[from] < value:
        stop
    contract.memory[from] = contract.memory[from] - value
    contract.memory[to] = contract.memory[to] + value
else:
    contract.memory[contract.creator] = 10000000000000000
    contract.memory[1000] = 1
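
For readers who want to trace the logic, here is a rough Python simulation of the contract above; the dictionary stands in for contract.memory, and the addresses and base fee are made-up values rather than anything defined by the protocol:

BASEFEE = 10**15          # invented stand-in for block.basefee
CREATOR = 0xAAAA          # invented creator address
memory = {}               # stands in for contract.memory

def run(sender, value, data):
    if value < 100 * BASEFEE:
        return                                   # not enough ether to feed the contract
    if memory.get(1000, 0):                      # already initialized: process a transfer
        to, amount = data[0], data[1]
        if to <= 1000:
            return                               # protect the reserved low memory slots
        if memory.get(sender, 0) < amount:
            return                               # insufficient balance
        memory[sender] = memory.get(sender, 0) - amount
        memory[to] = memory.get(to, 0) + amount
    else:                                        # first call: assign the whole supply
        memory[CREATOR] = 10000000000000000
        memory[1000] = 1

run(CREATOR, 100 * BASEFEE, [])                  # initialize
run(CREATOR, 100 * BASEFEE, [0xBBBB, 500])       # transfer 500 units to 0xBBBB
print(memory[0xBBBB])                            # 500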

Ethereum sub-currency developers may also wish to add some other more advanced features:

  • Include a mechanism by which people can buy currency units in exchange for ether, perhaps auctioning off a set number of units every day.
  • Allow transaction fees to be paid in the internal currency, and then refund the ether transaction fee to the sender. This solves one major problem that all other "sub-currency" protocols have had to date: the fact that sub-currency users need to maintain a balance of sub-currency units to use and units in the main currency to pay transaction fees in. Here, a new account would need to be "activated" once with ether, but from that point on it would not need to be recharged.
  • Allow for a trust-free decentralized exchange between the currency and ether. Note that trust-free decentralized exchange between any two contracts is theoretically possible in Ethereum even without special support, but special support will allow the process to be done about ten times more cheaply.

Financial derivatives

The underlying key ingredient of a financial derivative is a data feed to provide the price of a particular asset as expressed in another asset (in Ethereum's case, the second asset will usually be ether). There are many ways to implement a data feed; one method, pioneered by the developers of Mastercoin, is to include the data feed in the blockchain. Here is the code:

if tx.value < block.basefee:
    stop
if tx.sender != contract.creator:
    stop
contract.memory[tx.data[0]] = tx.data[1]

Any other contract will then be able to query index I of data store D by using block.contract_memory(D,I). A more advanced way to implement a data feed may be to do it off-chain - have the data feed provider sign all values, require anyone attempting to trigger the contract to include the latest signed data, and then use Ethereum's internal scripting functionality to verify the signature. Pretty much any derivative can be made from this, including leveraged trading, options, and even more advanced constructions like collateralized debt obligations (no bailouts here though, so be mindful of black swan risks).
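
As a loose illustration of the off-chain variant, here is a Python sketch in which HMAC merely stands in for the real public-key signature that operations like ECSIGN/ECRECOVER (listed later in this document) would provide; the key and message format are invented:

import hmac, hashlib, time

FEED_KEY = b"feed-provider-secret"          # invented stand-in for the provider's signing key

def publish(index, value):
    msg = f"{index}:{value}:{int(time.time())}".encode()
    tag = hmac.new(FEED_KEY, msg, hashlib.sha256).hexdigest()
    return msg, tag                          # handed to whoever triggers the contract

def verify(msg, tag):
    expected = hmac.new(FEED_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

msg, tag = publish(1, 2500)
print(verify(msg, tag))                      # True: the contract-side check would accept this datum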

To show an example, let's make a hedging contract. The basic idea is that the contract is created by party A, who puts up 4000 ether as a deposit. The contract then lies open for any party to accept it by putting in 1000 ether. Say that 1000 ether is worth $25 at the time the contract is made, according to index I of data store D. If party B accepts it, then after 30 days anyone can send a transaction to make the contract process, sending the same dollar value worth of ether (in our example, $25) back to B and the rest to A. B gains the benefit of being completely insulated against currency volatility risk without having to rely on any issuers. The only risk to B is if the value of ether falls by over 80% in 30 days - and even then, if B is online B can simply quickly hop onto another hedging contract. The benefit to A is the implicit 0.2% fee in the contract, and A can hedge against losses by separately holding USD in another location (or, alternatively, A can be an individual who is optimistic about the future of Ethereum and wants to hold ether at 1.25x leverage, in which case the fee may even be in B's favor).

state = contract.memory[1000]
if state == 0:
    if tx.value < 1000 ether:
        stop
    contract.memory[1000] = 1
    contract.memory[1001] = 998 * block.contract_memory(D,I)
    contract.memory[1002] = block.timestamp + 30 * 86400
    contract.memory[1003] = tx.sender
else:
    if tx.value < 200 finney:
        stop
    ethervalue = contract.memory[1001] / block.contract_memory(D,I)
    if ethervalue >= 5000 ether:
        send(contract.memory[1003],5000 ether,[])
    else if block.timestamp >= contract.memory[1002]:
        send(contract.memory[1003],ethervalue,[])
        send(A,5000 ether - ethervalue,[])
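
A quick numeric sanity check of the settlement logic, written in Python with invented prices (price here plays the role of block.contract_memory(D,I), expressed in dollars per ether):

DEPOSIT_A = 4000                         # ether put up by the creator A
DEPOSIT_B = 1000                         # ether put up by the accepting party B
TOTAL = DEPOSIT_A + DEPOSIT_B

def settle(price_start, price_now):
    locked_usd = 998 * price_start       # corresponds to contract.memory[1001]
    ethervalue = locked_usd / price_now  # ether now worth B's locked dollar value
    if ethervalue >= TOTAL:
        return TOTAL, 0                  # ether crashed by more than ~80%: B takes everything
    return ethervalue, TOTAL - ethervalue

print(settle(0.025, 0.025))              # price flat: B gets ~998 ether, A gets ~4002
print(settle(0.025, 0.0125))             # ether halves: B gets ~1996 ether, same dollar value
print(settle(0.025, 0.004))              # deep crash: B gets the full 5000 ether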

More advanced financial contracts are also possible; complex multi-clause options (eg. "Anyone, hereinafter referred to as X, can claim this contract by putting in 2 USD before Dec 1. X will have a choice on Dec 4 between receiving 1.95 USD on Dec 29 and the right to choose on Dec 11 between 2.20 EUR on Dec 28 and the right to choose on Dec 18 between 1.20 GBP on Dec 30 and paying 1 EUR and getting 3.20 EUR on Dec 29") can be defined simply by storing a state variable just like the contract above but having more clauses in the code, one clause for each possible state. Note that financial contracts of any form do need to be fully collateralized; the Ethereum network controls no enforcement agency and cannot collect debt.

Identity and Reputation Systems

The earliest alternative cryptocurrency of all, Namecoin, attempted to use a Bitcoin-like blockchain to provide a name registration system, where users can register their names in a public database alongside other data. The major cited use case is for a DNS system, mapping domain names like "bitcoin.org" (or, in Namecoin's case, "bitcoin.bit") to an IP address. Other use cases include email authentication and potentially more advanced reputation systems. Here is a simple contract to provide a Namecoin-like name registration system on Ethereum:

if tx.value < 25 ether:
    stop
if contract.memory[tx.data[0]]:
    stop
contract.memory[tx.data[0]] = tx.data[1]

One can easily add more complexity to allow users to change mappings, automatically send transactions to the contract and have them forwarded, and even add reputation and web-of-trust systems.

Decentralized Autonomous Organizations

The general concept of a "decentralized autonomous corporation" is that of an entity that has a certain set of shareholders which collect dividends and which, perhaps with a 67% majority, have the right to modify its code. The shareholders would collectively decide on how the corporation should allocate its funds, using either bounties, salaries or even more exotic mechanisms such as an internal currency to reward work, and the funds would be distributed automatically. This essentially replicates the legal trappings of a traditional company but using only cryptographic blockchain technology for enforcement. This is the "corporate" model of a decentralized organization; an alternative, perhaps described as a "decentralized autonomous community", would have all members have an equal share in the decision making and require 67% of existing members to agree to add or remove a member. The requirement that one person can only have one membership would then need to be enforced collectively by the group.

Some "skeleton code" for a DAO might look as follows.

There are three transaction types:

  • [0,k] to register a vote in favor of a code change
  • [1,k,L,v0,v1...vn] to register a code change at code k in favor of setting memory starting from location L to v0, v1 ... vn
  • [2,k] to finalize a given code change

Note that the design relies on the randomness of addresses and hashes for data integrity; the contract will likely get corrupted in some fashion after about 2^128 uses, but that is acceptable since nothing close to that volume of usage will exist in the foreseeable future. 2^255 is used as a magic number to store the total number of members, and a membership is stored with a 1 at the member's address. The last three lines of the contract are there to add C as the first member; from there, it will be C's responsibility to use the democratic code change protocol to add a few other members and code to bootstrap the organization.

k = sha3(tx.data[1])
if tx.data[0] == 0:
    if contract.memory[tx.sender] == 0:
        stop
    if contract.memory[k + tx.sender] == 0:
        contract.memory[k + tx.sender] = 1
        contract.memory[k] += 1
else if tx.data[0] == 1:
    if tx.value <= tx.datan * block.basefee * 200:
        stop
    if contract.memory[k] > 0:
        stop
    i = 3
    while i < tx.datan:
        contract.memory[k + i] = tx.data[i]
        i = i + 1
    contract.memory[k] = 1
    contract.memory[k+1] = tx.datan
    contract.memory[k+2] = tx.data[2]
else if tx.data[0] == 2:
    if contract.memory[k] >= contract.memory[2 ^ 255] * 2 / 3:
        if tx.value <= tx.datan * block.basefee * 200:
            stop
        i = 3
        n = contract.memory[k+1]
        loc = contract.memory[k+2]
        while i < n:
            # copy the values stored when the change was registered
            contract.memory[loc+i-3] = contract.memory[k+i]
            i = i + 1
if contract.memory[2 ^ 255 + 1] == 0:
    contract.memory[2 ^ 255 + 1] = 1
    contract.memory[C] = 1

This implements the "egalitarian" DAO model; one can easily extend it to a shareholder model by also storing how many shares each owner holds and providing a simple way to transfer shares.
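
To make the three transaction types concrete, here is a hypothetical walkthrough (the proposal identifier and memory locations are invented for illustration); suppose the members want to set memory slots 5000 and 5001 to 42 and 99:

  • Propose (type 1): a member sends tx.data = [1, PROPOSAL_ID, 5000, 42, 99]; the proposal's bookkeeping lives at k = sha3(PROPOSAL_ID), which records the vote count, the data length and the target location.
  • Vote (type 0): each member sends tx.data = [0, PROPOSAL_ID]; the contract marks the vote at k + member address and increments the counter at k.
  • Finalize (type 2): once the counter reaches two thirds of the membership, anyone sends tx.data = [2, PROPOSAL_ID] and the stored values are copied into memory slots 5000 and 5001.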

Further Applications

1) Crop insurance. One can easily make a financial derivatives contract but using a data feed of the weather instead of any price index. If a farmer in Iowa purchases a derivative that pays out inversely based on the precipitation in Iowa, then if there is a drought the farmer will automatically receive money and if there is enough rain the farmer will be happy because their crops would do well.

2) Generic insurance, relying on one of a specified set of third-party judges to adjudicate claims. The contract can even include a complex appeals process if so desired.

3) A decentrally managed data feed, using proof-of-stake voting to give an average (or more likely, median) of everyone's opinion on the price of a commodity, the weather or any other relevant data

4) An offer to participate in a cryptographically secure, trust-free peer-to-peer bet

5) SatoshiDice. The entire gambling site can essentially be replicated on the blockchain using either peer-to-peer bet functionality or a centralized model.

6) A full-scale on-chain stock market. Prediction markets are also easy to implement as a trivial consequence.

7) An on-chain decentralized marketplace, using the identity and reputation system as a base.

How do contracts work?

A contract-creating transaction is encoded as follows:

[
 nonce,
 '',
 value,
 [
 data item 0,
 data item 1,
 ...
 ],
 v,
 r,
 s
]

The data items will, in most cases, be script codes (more on this below). Contract creation transaction validation happens as follows:

  • Deserialize the transaction, and extract its sending address from its signature.
  • Calculate the transaction's fee. Check that the balance of the creator is at least the endowment plus the fee. If not, exit; if yes, pay the fee.
  • Take the last 20 bytes of the hash of the transaction making the contract. If a contract with that address already exists, exit. Otherwise, create the contract at that address.
  • Copy data item i to memory slot i for all i in [0 ... n-1] in the contract.
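
A minimal Python sketch of the address-derivation step (the hash function and the byte string below are assumptions for illustration; the real transaction would be serialized in Ethereum's own encoding rather than as a literal string):

import hashlib

def contract_address(serialized_tx):
    # Last 20 bytes of the hash of the contract-creating transaction.
    return hashlib.sha3_256(serialized_tx).digest()[-20:]

print(contract_address(b"example-serialized-creation-tx").hex())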

Whenever a transaction is sent to a contract, the contract executes its scripting code. The precise steps that happen when a contract receives a transaction are as follows:

Contract Script Interpretation

The contract's ether balance increases by the amount sent. The index pointer is set to zero, and STEPCOUNT = 0. Then, repeating forever:

  • if the command at the index pointer is STOP, invalid or greater than 255, exit from the loop
  • set MINERFEE = 0, VOIDFEE = 0
  • set STEPCOUNT <- STEPCOUNT + 1
  • if STEPCOUNT > 16, set MINERFEE <- MINERFEE + STEPFEE
  • if the command is LOAD or STORE, set MINERFEE <- MINERFEE + DATAFEE
  • if the command will fill up a previously zero memory field, set VOIDFEE <- VOIDFEE + MEMORYFEE
  • if the command will zero a previously used memory field, set VOIDFEE <- VOIDFEE - MEMORYFEE
  • if the command is EXTRO or BALANCE, set MINERFEE <- MINERFEE + EXTROFEE
  • if the command is a crypto operation, set MINERFEE <- MINERFEE + CRYPTOFEE
  • if MINERFEE + VOIDFEE > CONTRACT.BALANCE, HALT and exit from the loop
  • otherwise, subtract MINERFEE + VOIDFEE from the contract's balance and add MINERFEE to a running counter that will be added to the miner's balance once all transactions are parsed. Note that MINERFEE + VOIDFEE may be negative in some cases, in which case the contract balance would actually increase
  • run the command
  • if the command did not exit with an error, update the index pointer and return to the start of the loop. If the contract did exit with an error, break out of the loop.
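
The following rough Python sketch shows just the fee bookkeeping of that loop, with invented fee constants and the actual opcode execution reduced to advancing the index pointer; it is not a working VM:

STEPFEE, DATAFEE = 1, 20                  # invented stand-ins for the fee constants
LOAD, STORE, STOP = 0x36, 0x37, 0x00

def charge_fees(code, balance):
    pc, steps, to_miner = 0, 0, 0
    while pc < len(code):
        op = code[pc]
        if op == STOP or op > 255:
            break
        minerfee, voidfee = 0, 0          # voidfee would carry MEMORYFEE charges and refunds
        steps += 1
        if steps > 16:
            minerfee += STEPFEE
        if op in (LOAD, STORE):
            minerfee += DATAFEE
        if minerfee + voidfee > balance:
            break                         # the contract can no longer pay: halt
        balance -= minerfee + voidfee
        to_miner += minerfee
        pc += 1                           # "run the command" elided
    return balance, to_miner

print(charge_fees([0x01] * 20 + [STOP], balance=100))   # 20 ADDs: 4 steps are charged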

The language maintains a stack which is initialized as empty, and all operations either manipulate the stack, perform special operations such as sending transactions and setting memory, or do both. In the following descriptions, S[-1], S[-2], etc represent the topmost, second topmost, etc items on the stack. The individual operations are defined as follows:

  • (00) STOP - halts execution
  • (01) ADD - pops two items and pushes S[-2] + S[-1] mod 2^256.
  • (02) MUL - pops two items and pushes S[-2] * S[-1] mod 2^256
  • (03) SUB - pops two items and pushes S[-2] - S[-1] mod 2^256
  • (04) DIV - pops two items and pushes floor(S[-2] / S[-1]). If S[-1] = 0, halts execution.
  • (05) SDIV - pops two items and pushes floor(S[-2] / S[-1]), but treating values above 2^255 as negative (ie. x -> 2^256 - x). If S[-1] = 0, halts execution.
  • (06) MOD - pops two items and pushes S[-2] mod S[-1]. If S[-1] = 0, halts execution.
  • (07) SMOD - pops two items and pushes S[-2] mod S[-1], but treating values above 2^255 as negative (ie. x -> 2^256 - x). If S[-1] = 0, halts execution.
  • (08) EXP - pops two items and pushes S[-2] ^ S[-1] mod 2^256
  • (09) NEG - pops one item and pushes 2^256 - S[-1]
  • (0a) LT - pops two items and pushes 1 if S[-2] < S[-1] else 0
  • (0b) LE - pops two items and pushes 1 if S[-2] <= S[-1] else 0
  • (0c) GT - pops two items and pushes 1 if S[-2] > S[-1] else 0
  • (0d) GE - pops two items and pushes 1 if S[-2] >= S[-1] else 0
  • (0e) EQ - pops two items and pushes 1 if S[-2] == S[-1] else 0
  • (0f) NOT - pops one item and pushes 1 if S[-1] = 0 else 0
  • (10) MYADDRESS - pushes the contract's address as a number
  • (11) TXSENDER - pushes the transaction sender's address as a number
  • (12) TXVALUE - pushes the transaction value
  • (13) TXDATAN - pushes the number of data items
  • (14) TXDATA - pops one item and pushes data item S[-1], or zero if index out of range
  • (15) BLK_PREVHASH - pushes the hash of the previous block (NOT the current one since that's impossible!)
  • (16) BLK_COINBASE - pushes the coinbase of the current block
  • (17) BLK_TIMESTAMP - pushes the timestamp of the current block
  • (18) BLK_NUMBER - pushes the current block number
  • (19) BLK_DIFFICULTY - pushes the difficulty of the current block
  • (1a) BASEFEE - pushes the base fee (x as defined in the fee section below)
  • (20) SHA256 - pops one item and then ceil(S[-1] / 32) further items. Then takes those items as 32-byte strings, concatenates them in top-to-bottom order, taking out the low-order bytes of the bottommost if necessary, and pushes the SHA256 of the resulting string.
  • (21) RIPEMD160 - works just like SHA256 but with the RIPEMD-160 hash
  • (22) ECMUL - pops three items. If (S[-2],S[-1]) are a valid point in secp256k1, including both coordinates being less than P, pushes (S[-2],S[-1]) * S[-3], using (0,0) as the point at infinity. Otherwise, pushes (2^256 - 1, 2^256 - 1). Note that there are no restrictions on S[-3].
  • (23) ECADD - pops four items and pushes (S[-4],S[-3]) + (S[-2],S[-1]) if both points are valid, otherwise (2^256 - 1,2^256 - 1).
  • (24) ECSIGN - pops two items and pushes (v,r,s) as the Electrum-style RFC6979 deterministic signature of message hash S[-1] with private key S[-2] mod N.
  • (25) ECRECOVER - pops four items and pushes (x,y) as the public key from the signature (S[-3],S[-2],S[-1]) of message hash S[-4]. If the signature has invalid v,r,s values (ie. v not in [27,28], r not in [0,P], s not in [0,N]), return (2^256 - 1,2^256 - 1).
  • (26) ECVALID - pops two items and pushes 1 if (S[-2],S[-1]) is a valid secp256k1 point (including (0,0)) else 0
  • (27) SHA3 - works just like SHA256 but with the SHA3 hash, 256 bit version.
  • (30) PUSH - pushes the item in memory at the index pointer + 1, and advances the index pointer by 2.
  • (31) POP - pops one item.
  • (32) DUP - pushes S[-1] to the stack.
  • (33) DUPN - reads item in memory at the index pointer + 1, say N, and pushes S[-N] to the stack. Advances the index pointer by 2. If N = 0, exits with an error instead.
  • (34) SWAP - pops two items and pushes S[-1] then S[-2]
  • (35) SWAPN - reads item in memory at the index pointer + 1, say N, and swaps S[-1] and S[-N] to the stack. Advances the index pointer by 2. If N = 0, exits with an error instead.
  • (36) LOAD - pops one item and pushes the item in memory at index S[-1]
  • (37) STORE - pops two items and sets the item in memory at index S[-1] to S[-2]
  • (40) JMP - pops one item and sets the index pointer to S[-1]
  • (41) JMPI - pops two items and sets the index pointer to S[-2] only if S[-1] is nonzero
  • (42) IND - pushes the index pointer
  • (50) EXTRO - pops two items and pushes memory index S[-2] of contract S[-1]
  • (51) BALANCE - pops one item and pushes balance of that address, or zero if the address is invalid
  • (60) MKTX - pops three items and initializes a transaction to send S[-2] ether to S[-1] with S[-3] data items. Pops that many data items and adds them in order popped as data items to the transaction. Then sends the transaction. Note that if S[-1] = 0 then this creates a new contract.
  • (ff) SUICIDE - pops one item, destroys the contract and clears all memory, sending the entire balance plus the negative fee from clearing memory to the address at S[-1]
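
To make the stack semantics above concrete, here is a small self-contained Python interpreter for a handful of these opcodes (fee accounting, the crypto operations and most instructions are omitted; the opcode numbers follow the list):

MOD = 2**256

def run(code):
    memory = dict(enumerate(code))       # the contract's code lives in its memory
    stack, pc = [], 0
    while True:
        op = memory.get(pc, 0)
        if op == 0x00:                   # STOP
            return stack, memory
        elif op == 0x01:                 # ADD: push S[-2] + S[-1] mod 2^256
            a, b = stack.pop(), stack.pop()
            stack.append((b + a) % MOD)
        elif op == 0x02:                 # MUL: push S[-2] * S[-1] mod 2^256
            a, b = stack.pop(), stack.pop()
            stack.append((b * a) % MOD)
        elif op == 0x30:                 # PUSH: next memory slot holds the value
            stack.append(memory.get(pc + 1, 0))
            pc += 2
            continue
        elif op == 0x36:                 # LOAD: push memory[S[-1]]
            stack.append(memory.get(stack.pop(), 0))
        elif op == 0x37:                 # STORE: memory[S[-1]] = S[-2]
            index, value = stack.pop(), stack.pop()
            memory[index] = value
        else:                            # treat anything else as invalid and halt
            return stack, memory
        pc += 1

# PUSH 3, PUSH 4, ADD, PUSH 100, STORE, STOP  ->  memory[100] == 7
stack, mem = run([0x30, 3, 0x30, 4, 0x01, 0x30, 100, 0x37, 0x00])
print(mem[100])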

Fees

Ethereum will have seven primary fees, of which one applies to transaction senders and six apply to contracts. The fees are:

  • TXFEE (100x) - fee for sending a transaction
  • NEWCONTRACTFEE (100x) - fee for creating a new contract, not including the memory fee for each item in script code
  • STEPFEE (x) - fee for every computational step after the first sixteen in contract execution
  • MEMORYFEE (100x) - fee for adding a new item to a contract's memory, including when first creating a contract. The memory fee is the only fee that is not paid to a miner, and is refunded when memory from a contract is removed.
  • DATAFEE (20x) - fee for accessing or setting a contract's memory from inside that contract
  • EXTROFEE (40x) - fee for accessing memory from another contract inside a contract
  • CRYPTOFEE (20x) - fee for using any of the cryptographic operations

One novel innovation in Ethereum is that the fees will be inversely proportional to the square root of the difficulty; that is, x = floor(10^21 / floor(difficulty ^ 0.5)). The reason why this is done is twofold:

It creates a fully decentralized mechanism for setting transaction fees that ensures that fees do not grow too quickly with the growth of the network. For example, if ether becomes 4x more valuable, then it is expected that network mining power and therefore difficulty will increase by 4x, meaning that ether-denominated transaction fees will decrease by 2x and therefore real-value transaction fees will only increase 2x. If computers get 4x more powerful due to Moore's Law, difficulty will increase 4x and so fees will decrease 2x, reflecting computers' increased ability to handle higher load.

The reason the exponent should be kept below 1 is that (1) Moore's Law may affect computing power faster than the slowest component of the transaction processing pipeline, and (2) if the Ethereum network grows very quickly due to an increased number of computers and not any increase in power to each individual computer, an exponent of 1 may push per-computer load too high.
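
A quick numeric check of that scaling rule in Python (the difficulty values are arbitrary):

from math import isqrt

def basefee(difficulty):
    # x = floor(10^21 / floor(difficulty ^ 0.5)), as described above
    return 10**21 // isqrt(difficulty)

d = 10**12
print(basefee(d))        # 10^15
print(basefee(4 * d))    # quadrupled difficulty -> ether-denominated fee halves to 5*10^14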

Conclusion

The Ethereum protocol's design philosophy is in many ways the opposite of that taken by many other cryptocurrencies today. Other cryptocurrencies aim to add complexity and increase the number of "features"; Ethereum, on the other hand, takes features away. The protocol does not "support" multisignature transactions, multiple inputs and outputs, hash codes, lock times or many other features that even Bitcoin provides. Instead, all complexity comes from an all-powerful, Turing-complete assembly language, which can be used to build up literally any feature that is mathematically describable. As a result, we have a cryptocurrency protocol whose codebase is very small, and yet which can do anything that any cryptocurrency will ever be able to do.

References and Further Reading

  • Colored coins whitepaper: https://docs.google.com/a/buterin.com/document/d/1AnkP_cVZTCMLIzw4DvsW6M8Q2JC0lIzrTLuoWu2z1BE/edit
  • Mastercoin whitepaper: https://github.com/mastercoin-MSC/spec
  • Decentralized autonomous corporations, Bitcoin Magazine: http://bitcoinmagazine.com/7050/bootstrapping-a-decentralized-autonomous-corporation-part-i/
  • Smart property: https://en.bitcoin.it/wiki/Smart_Property
  • Smart contracts: https://en.bitcoin.it/wiki/Contracts
  • Simplified payment verification: https://en.bitcoin.it/wiki/Scalability#Simplifiedpaymentverification
  • Merkle trees: http://en.wikipedia.org/wiki/Merkle_tree
  • Patricia trees: http://en.wikipedia.org/wiki/Patricia_tree
  • Bitcoin whitepaper: http://bitcoin.org/bitcoin.pdf
  • GHOST: http://www.cs.huji.ac.il/~avivz/pubs/13/btcscalabilityfull.pdf
  • StorJ and Autonomous Agents, Jeff Garzik: http://garzikrants.blogspot.ca/2013/01/storj-and-bitcoin-autonomous-agents.html
  • Ethereum RLP: http://wiki.ethereum.org/index.php/RLP
  • Ethereum Merkle Patricia trees: http://wiki.ethereum.org/index.php/Patricia_Tree
  • Ethereum Dagger: http://wiki.ethereum.org/index.php/Dagger

Stuck: Why Americans Stopped Moving to the Richest States - Derek Thompson - The Atlantic

Comments:"Stuck: Why Americans Stopped Moving to the Richest States - Derek Thompson - The Atlantic"

URL:http://www.theatlantic.com/business/archive/2014/01/stuck-why-americans-stopped-moving-to-the-richest-states/282969/


The long road from "go west, young man" to "stay put, everyone."

Reuters

In 1865, Horace Greeley said "go west, young man," and, for a century and a half, men and women, young and old, were keen to listen. Even into the early 2000s, the sunbelt stretching into the suburban southwest fattened with new housing developments—ultimately, to disastrous effect. But in the last decade, the ambition to go west has been replaced with a lazier notion—to "stay put."

"Americans are moving far less often than in the past, and when they do migrate it is typically no longer from places with low wages to places with higher wages," Tim Noah wrote in Washington Monthly. "Rather, it’s the reverse." Why America lost her wanderlust is not entirely clear—perhaps dual-earner households make long moves less likely; perhaps the Great Recession pinned underwater homeowners on their plots—but those still wandering aren't going to the right cities.

When Greeley suggested a westward move, he wasn't making an argument for sun and gold. He was, above all, suggesting that young people escape from areas with expensive housing:

Washington is not a place to live in. The rents are high, the food is bad, the dust is disgusting, and the morals are deplorable. Go West, young man, go West and grow up with the country.
Well ... plus ca change. Today, the aversion to high rental costs is perhaps the most important driver of national migration. According to Atlas Van Lines' annual survey of household moves, many dense, high-income states are bleeding people, while many poorer states with plentiful land continue to add families. Here is 2013's map, with ORANGE states losing more people and BLUE states gaining...

Americans aren't simply moving to the states with the lowest unemployment (Oregon, Tennessee, and North Carolina all have jobless rates above the national average). More importantly, we aren't moving to states with the best records for low-income families getting ahead. In fact, we're often fleeing the best places for an upwardly mobile middle class.

According to Harvard’s Equality of Opportunity Project, the states with the most upwardly mobile cities include Pennsylvania (with five of the top 12 cities), New York and New Jersey (Albany, Newark, and New York are in the top 30). All three states are seeing net emigration, according to the Atlas map. Five of the 11 worst cities for poor children to move into the top quintile are in Tennessee and North Carolina—two of the few states to see more inbound moves in 2013.

This doesn't make much sense if you envision American families rushing to the most promising metros. It does make sense if you see American families rushing to the most affordable homes.

Some of America's most productive cities for medium- and low-income families—Boston, Honolulu, San Jose, New York—are also the most expensive. This is often due to (or at least, exacerbated by) exclusionary zoning and housing regulations that limit the number of available units, which drives up the price of housing, ensuring that low-income families can't afford to live there. The sad irony is that density is a good predictor of upward mobility, but sunbelt cities with affordable housing often sprawl deep into the exurbs, where families aren't anywhere near the best jobs. The very thing that makes those cities attractive places to get to also makes them bad places to get ahead.

In the map above, every state had fewer inbound moves in 2013 than in 2004, except for two small states with relatively little traffic: Oklahoma and North Dakota. But the problem isn't just that Americans aren't moving as much as they used to. It's that the allure of cheaper housing—famously celebrated by Horace Greeley, himself!—often leads families to cities with the worst social mobility. The instinct to "go west" might doom families to go nowhere, at all.

RealTime Data Compression: Finite State Entropy - A new breed of entropy coder

Comments:"RealTime Data Compression: Finite State Entropy - A new breed of entropy coder"

URL:http://fastcompression.blogspot.fr/2013/12/finite-state-entropy-new-breed-of.html


 In compression theory, the entropy encoding stage is typically the last stage of a compression algorithm, the one where the gains from the model are realized.

The purpose of the entropy stage is to reduce a set of flags/symbols to their optimal space given their probability. To put it simply, if a flag has a 50% chance of being set, you want to encode it using 1 bit. If a symbol has a 25% probability of having value X, you want to encode it using 2 bits, and so on.
The optimal size to encode a symbol of a given probability is well established, and known as the Shannon limit. Simply put, you can't beat that limit, you can only get close to it.

A solution to this problem has been worked on for decades, starting with Claude Shannon's own work, which was efficient but not optimal. The optimal solution was ultimately found by one of Shannon's own pupils, David A. Huffman, almost by chance. His version became immensely popular, not least because he could prove, a few years later, that his construction method was optimal: there was no way to build a better distribution.

Or so it was thought.
There was still a problem with Huffman encoding (and all previous ones): a hidden assumption is that a symbol must be encoded using an integer number of bits. To put it simply, you can't go lower than 1 bit.
It seems reasonable, but that's not even close to Shannon's limit. An event which has 90% probability to happen for example should be encoded using 0.15 bits. You can't do that using Huffman trees.
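
To put numbers on that gap, a few lines of Python compare the Shannon-optimal code length with the one-bit floor of a prefix code such as Huffman:

from math import log2

for p in (0.5, 0.25, 0.9):
    optimal = -log2(p)    # Shannon limit, in bits, for a symbol of probability p
    print(f"p={p}: optimal {optimal:.2f} bits, prefix code >= 1 bit")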

A solution to this problem was found almost 30 years later, by Jorma Rissanen, under the name of Arithmetic coding. Explaining how it works is outside the scope of this blog post, since it's complex and would require a few chapters; I invite you to read the Wikipedia page if you want to learn more about it. For the purpose of this presentation, it's enough to say that Arithmetic encoding, and its little brother Range encoding, solved the fractional bit issue of Huffman, and with only some minimal losses due to rounding, get very close to the Shannon limit. So close in fact that entropy encoding has, since then, been considered a "solved problem".

Which is terrible because it gives the feeling that nothing better can be invented.

Well, there is more to this story. Of course, there is still a little problem with arithmetic encoders: they require arithmetic operations, such as multiplications and divisions, with strictly defined rounding behavior.

This is a serious requirement for a CPU, especially in the 80's. Moreover, some lawyer-happy companies such as IBM grabbed this opportunity to flood the field with multiple dubious patents on minor implementation details, making it clear that anyone trying to use the method would face expensive litigation. Considering this environment, the method was barely used for the next few decades, Huffman remaining the algorithm of choice for the entropy stage.

Even today, with most of the patent issues cleared, modern CPUs will still take a hit due to the divisions. Optimized versions can sometimes avoid the division during the encoding stage, but not the decoding stage (with the exception of Binary arithmetic coding, which is however limited to 0/1 symbols).
As a consequence, arithmetic encoders are quite slower than Huffman ones. For low-end or "retro" CPU, it's simply out of range (no pun intended...).

It's been a long-time objective of mine to bring arithmetic-level compression performance to vintage (or retro) CPUs. Consider it a challenge. I've tested several variants, for example a mix of Huffman and Binary Arithmetic, which was free of divisions, but alas still needed multiplications, and required more registers to operate, which was overkill for weak CPUs.

So I've been reading with a keen eye the ANS theory, from Jarek Duda, which I felt was heading in the same direction. If you are able to fully understand his paper, you are better than me, because quite frankly, most of the wording used in his document is way out of my reach. (Note: Jarek pointed to an updated version of his paper, which should be easier to understand.) Fortunately, it nonetheless resonated, because I was working on something very similar, and Jarek's document provided the last elements required to make it work.

And here is the result today, the Finite State Entropy coder, which is proposed in a BSD-license package at Github.

In a nutshell, this coder provides the same level of performance as an Arithmetic coder, but only requires additions, masks, and shifts.
The speed of this implementation is fairly good, and even on modern high-end CPUs, it can prove a valuable replacement for standard Huffman implementations.
Compared to huff0, a pretty fast Huffman entropy coder implementation (used for example within Zhuff), FSE manages to outperform it on all metrics, including ratio and speed.

Benchmark platform: Core i5-3340M (2.7GHz), Windows Seven 64-bits
Benchmarked file : win98-lz4-run
Algorithm Ratio Compression Decompression
FSE       2.684   195 MS/s     265 MS/s
Huff0     2.673   165 MS/s     165 MS/s

Benchmarked file : proba70.bin
Algorithm Ratio Compression Decompression
FSE       6.313   195 MS/s     265 MS/s
Huff0     5.574   190 MS/s     190 MS/s


Benchmarked file : proba90.bin
Algorithm Ratio Compression Decompression
FSE       15.19   195 MS/s     265 MS/s
Huff0     7.176   190 MS/s     190 MS/s

As can be guessed, the higher the compression ratio, the more efficient FSE becomes compared to Huffman, since Huffman can't break the "1 bit per symbol" limit.
FSE speed is also very stable, under all probabilities.
I'm quite pleased with the result, especially considering that, since the invention of arithmetic coding in the 70's, nothing really new has been brought to this field.

This is still beta stuff, so please consider this first release for testing purposes mostly.

Explaining how and why it works is pretty complex, and will require a few more posts, but bear with me, they will come in this blog.

Hopefully, with Jarek's document and this implementation now published, it will be harder this time for big corporations to confiscate an innovation from the public domain.

BMW's Laser Headlights - BMW Shows Us How its Freakin' Laser Light Show Works


[Wikimedia-l] Announcement Sarah Stierch

My Amazon interview experience | Jay Huang

Comments:"My Amazon interview experience | Jay Huang"

URL:http://www.jayhuang.org/blog/my-amazon-interview-experience/


It all started back when I was still working at SAP. A few colleagues mentioned Amazon was opening up another office in Yaletown. I believe it was in January 2013 or so. I wasn’t very interested at first, but after hearing about it a couple times, I gave it some more thought and decided it wouldn’t hurt. I was going to leave SAP at the end of April, and if Amazon turned out to be a good fit, I just might go there. This was before I interviewed and got offers for Palo Alto, one other company, and the company I worked for from May to July.

There were a few listings on their website, so I applied to a “Web Development Engineer” posting for Vancouver as I felt it was the best match for my skills and experience. I wasn’t actively looking for a job at that point, so I didn’t think much of it and pretty soon, forgot I had even applied. Then, out of the blue, I was contacted by one of Amazon’s recruiters on May 24th 2013 for an interview on the 29th. I seriously considered declining it as I had just moved to my new job for a month, and was certainly not looking to leave (yet).

As many people know, I absolutely hate speaking on the phone. First of all, I’m more of a listener, and when I’m speaking with a stranger for the first time on the phone, that comes across as unenthusiastic or uninterested. Secondly, it forces me to context switch and break my mental train of thought. Whether or not it’s pre-scheduled does not matter; I’m forced to abruptly pause my work and move my attention to something else. As a freelancer, I have the option to cut myself off from virtually any environmental disruptions, and prefer to allocate small time blocks to update or communicate with people/clients. Third and most importantly, it’s synchronous communication. When I have to pick up the phone and speak to someone, not only am I making an expensive context switch, I have to be wary of tone, wording, and other things that cause the other party to misunderstand me. Aside from that, I’m unable to give any issues more in-depth and careful thought, which really defeats the whole purpose of discussing them. But in those recent months I had been looking to make a conscious effort to expose myself to more social/human interaction, so I decided I would give it a try. I had nothing to lose anyways; I was working on stuff I enjoyed and this phone call would have no effect on me other than a bit nervous.

Phone interview:

I took the morning off (and made it up later) to do the phone interview. When I picked up the phone the interviewer introduced himself as a Web Development Engineer from Seattle. Immediately I noticed the Indian accent and became nervous because I realized I already had trouble understanding him. He thanked me profusely for taking the interview (literally 5~6 thank yous), which was quite unexpected but also helped relieve some of my anxiety. Nevertheless, we proceeded with the interview. We went on collabedit and he tested my understanding of some of the key features of Javascript, the design and implementation of a type of web component, a bit about HTTP and servers, understanding and application of CSS, and an algorithmic implementation question.

The hiring team has really enjoyed speaking with you and we would like to schedule a time for you to come to Amazon for in-person interviews!

Although I was able to answer all the questions, I was second guessing myself because I felt that I must have misunderstood something between the foreign accent and the poor phone connection. I thought that was the end of it and wrote it off as a nice experience. Then on June 11th, I got an email from a different recruiter saying that they would like me to go in for on-site interviews. Weird, I thought — almost everyone who I knew that interviewed with Amazon went through 2 or more phone interviews before going on-site. I’m certainly not going to complain about having less hurdles to jump through. Upon reading the email closer, I realize they want me to fly down to Seattle for the interviews. I was perplexed; I thought the position was in Vancouver? It was, they said. But they still wanted me to fly down.

Paid flight, travel, food, and lodging? Okay I guess I’ll take a vacation day and head down. Better not forget my passport! Oh, passport…let me take a quick look. My passport was about to expire in 3 days. I quickly let the recruiter know and started the passport renewal process. After a long 3 weeks, I finally got my new passport. We scheduled the on-site for Monday, July 8th, arriving Sunday around noon.

Sunday comes around and I’ve arrived in Seattle. I didn’t have time to prepare for any of these interviews because of my full-time job, my freelance work, and attending night school. I decided I would walk around and figure out how to walk to the building (they have 7 in the area), and just roam a bit. Returning to the hotel, I headed to sleep early so I would be well rested. Unfortunately, I got no sleep that night. Something about the nice hotel bed or the fact that I was not doing my usual late-night freelance work made me restless.

Interview #1:

My first interview was at 10:15am and a 20 minute walk from the hotel (note that Amazon pays for your interview transportation expenses, which includes your flight + taxi to the interview should you need it). I headed out at 9:35 and arrived 20 minutes early to check-in, sign the non-disclosure form (which means I will not be sharing interview questions in this post; just a broad overview), and sit there to calm my nerves. I wasn’t nervous so much about the prospect of getting an offer or not as much as I was about sitting in a small room with a stranger and writing code on a giant whiteboard. There was a sudden change of the first interviewer, so it took him some time to come get me. Immediately upon sitting down, I was presented with a problem that was an integral part of Amazon’s marketplace websites. It was an algorithmic problem that the team had run into and solved, and on the front-end. I wrote the algorithm in Javascript.

Interview #2:

The next interviewer came to pick me up and go to lunch. He introduced himself as a developer on the team, but was acting temporarily as the manager because the previous one left recently. I didn’t have a specific preference in mind, so he took me to a small local sandwich shop where I ordered chicken ciabatta and he paid with a company credit card. He asked if I was interviewing for a position in Seattle, and seemed confused why I was flown down to Seattle when the position was for Vancouver. I was asked a lot of questions about my previous technical experience, technical challenges I faced, asked me to elaborate on some of the more interesting architectural solutions I’ve implemented and discussed trade-offs between other solutions. We had a pretty good discussion, but responding to his questions left little time to finish lunch, so he gave me the last 10 minutes to eat while he answered a few of my questions.

Interview #3:

This interview did not involve writing code. The interviewer was the manager of a major team within Amazon. He gave me an algorithmic question (also very relevant to Amazon), and I came up with a solution fairly quickly, but he noticed I was still deep in thought. I explained that it was the best solution I could come up with, but I was wondering if there could be a better way to do it. We ran over some of the details together, discussed potential trade-offs in a different algorithm, and decided my solution could not be any more efficient. I was also asked a behavioral question specific to Amazon's business.

Interview #4:

For this interview I had two interviewers; one of them was shadowing as he was fairly new to the company. They tested my understanding of how browsers handled various things in HTML/CSS, then gave me a screenshot of a new release about to be pushed live, and asked me to write the HTML/CSS for the whole page.

Interview #5:

Here I ran into my interviewer from the phone interview. He asked me to explain the differences between two implementations in Javascript, and use cases of each one. I also wrote HTML/CSS/JS for a webpage component, with a focus on modular code. He then tested my low-level understanding of how browsers handled the DOM, and I had to implement that from scratch. I was pretty tired at this point, and had some problems understanding his accent, but I think I did okay.

Interview #6:

When I send you the offer on Wednesday…

The last interview was with the lead recruiter from Vancouver (apparently he flew from Vancouver to Seattle to interview me, a candidate that was in Vancouver and had to fly to Seattle for the interviews…). He asked if I was tired after such a long day, and then counted 7 interviews. I only remember 6, but he had the official list so I guess it’s 7. Maybe he was talking about interviewers. He mentioned I had received great feedback and that they were excited to move forward. He then spent the remaining 35~40 minutes drawing on the whiteboard the compensation details of the position, including the signing bonus(es), equity options, performance bonus, how I could choose between more options or cash for the bonus, the benefits of choosing one over the other, the base salary, etc. He said “when I send you the offer on Wednesday, you will see _______” on a couple occasions. Then he walked out with me and explained that the team in Vancouver was very diverse and full of cool people, how it was much like a startup, and that I would love it. He then said “once you accept the offer, we will fly you back down around 3 weeks later to do the training here, because the Vancouver offices are still quite small and we don’t have many hires that week”.

After not hearing back from him, I emailed him to follow up. I didn’t hear back so I emailed him another two times, with no response. It’s been months and he has yet to reply to me either with an offer, or a rejection letter. I guess it didn’t work out after all.

Aside from the lack of response from the lead recruiter and the weird logistics of sending me to Seattle to interview for a position in Vancouver and having all the interviewers puzzled, the interviews were pretty interesting. Each interview during the on-site tested a specific skill/topic required for one to be successful in the position, ensuring that there are no glaring gaps in knowledge. I would say it was one of the best interviews I’ve done for a front-end position in terms of getting a full picture of the candidate’s knowledge and experience.

Join the discussion on HN: https://news.ycombinator.com/item?id=7040382

Friends Newsletter #45

Comments:"Friends Newsletter #45"

URL:https://duck.co/blog/friends-newsletter-45


DuckDuckGo friends, happy new year!

In 2013, over one billion searches were made on DuckDuckGo. Needless to say, it was a great year for us.

We're looking forward to similar greatness in 2014. We have a lot of big things planned for this year that we hope will address a lot of the excellent feedback you have been giving us for some time. So please stay tuned.

We also are continually focused on growing the DuckDuckGo community. Join us at duck.co to help shape your search engine.

Thank you for your continued support!

The DuckDuckGo Staff

everpix/Everpix-Intelligence · GitHub

Comments:"everpix/Everpix-Intelligence · GitHub"

URL:https://github.com/everpix/Everpix-Intelligence


About Everpix

Everpix was started in 2011 with the goal of solving the Photo Mess, an increasingly real pain point in people's life photo collections, through ambitious engineering and user experience. Our startup was angel and VC funded with $2.3M raised over its lifetime.

After 2 years of research and product development, and despite having a very enthusiastic user base of early adopters combined with strong PR momentum, we didn't succeed in raising our Series A in the highly competitive VC funding market. Unable to continue operating our business, we had to announce our upcoming shutdown on November 5th, 2013.

High-Level Metrics

At the time of its shutdown announcement, the Everpix platform had 50,000 signed-up users (including 7,000 subscribers) with 400 million photos imported, while generating subscription sales of $40,000 / month during the last 3 months (i.e. enough money to cover variable costs, but not the fixed costs of the business).

The following high-level metrics are from September 2012, when we started selling subscriptions, to October 2013, the last month before our shutdown announcement:

Retained users: users who used the Web, iOS, Mac, Windows Everpix apps or opened a Flashback email.

Complete Dataset

Building a startup is about taking on a challenge and working countless hours on solving it. Most startups do not make it but rarely do they reveal the story behind, leaving their users often frustrated. Because we wanted the Everpix community to understand some of the dynamics in the startup world and why we had to come to such a painful ending, we worked closely with a reporter from The Verge who chronicled our last couple weeks. The resulting article generated extensive coverage and also some healthy discussions around some of our high-level metrics and financials. There was a lot more internal data we wanted to share but it wasn't the right time or place.

With the Everpix shutdown behind us, we had the chance to put together a significant dataset covering our business from fundraising to metrics. We hope this rare and uncensored inside look at the internals of a startup will benefit the startup community.

Here are some examples of common startup questions this dataset helps answer:

  • What are investment terms for consecutive convertible notes and an equity seed round? What does the end cap table look like? (see here)
  • How does a Silicon Valley startup spend its raised money during 2 years? (see here)
  • What does a VC pitch deck look like? (see here)
  • What kinds of reasons do VCs give when they pass? (see here)
  • What are the open rate and click rate of transactional and marketing emails? (see here)
  • What web traffic do various news websites generate? (see here and here)
  • What is the conversion rate from product landing page to sign-up for new visitors? (see here)
  • How fast do people purchase a subscription after signing up to a freemium service? (see here and here)
  • Which countries have higher subscription rates? (see here and here)

The dataset is organized as follows:

  • Anonymized VC Feedback.md: Unedited feedback from VCs who passed on Everpix
  • External Metrics: Raw metrics retrieved from external systems like Google Analytics or AWS billing
  • Financials.md: High-level financials with fundraising and final P&L
  • Internal Metrics: Raw and computed metrics from our service from photos imported to subscription sales
  • Presentation Slides: The slides used to introduce Everpix to press and investors along with the latest version of our more extensive VC pitch deck
  • Public Feedback: Press articles covering Everpix and user reviews on App Stores
  • Timeline & Numbers.md: Everpix product timeline and numbers

The metrics in the dataset were "frozen" as of November 6th, 2013 (the day following the announcement of Everpix's shutdown) and represent more than 90% of all available Everpix metrics. Only metrics covered by NDAs with partners or metrics exposing identifiable Everpix users information have been omitted.

To maximize reusability, metrics are formatted as CSV files (using UTF-8 text encoding) and with the first row being the column names.

Forcing links to open in new windows: an argument that should have ended 15 years ago – Marco.org
