
Why Infinite-Scrolling in Mobile Apps is Destroying Content Consumption - This is Elevator


URL:http://www.thisiselevator.com/infinitescroll/


It’s 11PM on a Wednesday night. You are lying in bed, scrolling through Twitter. Or Instagram. Or Facebook. Or Flipboard. You reach the end of the feed, and a loading animation appears. More news loads. More pictures appear. You scroll again. The loading animation appears. More content loads. You continue scrolling. There is no end. There is no completion. The cycle of news, the stream of new content continues unabated. As you sleep, the feed will refill once more, like an enormous lake being refilled by rain. By the next morning, your iPhone will be in your hands once more. Scrolling. The loading circle appears. More content to consume. There is no end. There is no conclusion. And that needs to change. That hollow sense of dissatisfaction needs to be replaced by the sense of achievement which can only be found by finishing something. By reaching the end.

The one great loss resulting from the availability of huge volumes of information online is a sense of completeness. A newspaper always ends. There is a finite volume of text within that paper that can be read. And that’s key to the reading experience: the knowledge that when you reach the end, you will know all there is to know. When reading a book, you read with absolute confidence that when you turn to page 500, you will know the answer. You will know how it ends, and you will have finished something.

The emergence of infinite-scrolling apps like Instagram and Facebook has rejected that experience. At first, it felt great. When there is an infinite amount of content available, we can never be bored. When there is always something new to read, or something new to know, our appetite for information can never be left unsatisfied. It’s like walking into an all-you-can-eat buffet. At first it feels great. But three hours in, you feel bloated and over-indulged. After spending hours scrolling through Instagram, Facebook, Twitter or Flipboard, our minds feel tired. We feel intellectually bloated, and yet completely unsatisfied. Why? Because there is more out there. What if I’m missing out on something? What if there is some critical piece of knowledge just three flicks of my finger away?

This problem of endless dissatisfaction needs to be solved. Mobile app developers need to recognise that a boundary has been crossed. That sometimes, even if not always, people just really want to finish something. Yahoo’s News Digest is attempting to solve this problem, but for a very niche audience. The app distributes the same news content to every user, which is clearly an unsustainable solution. To solve the problem of infinite content, we don’t need to lose personalisation. There is no reason why Facebook, Instagram or Twitter can’t algorithmically determine which, say, 20 posts are most important to us, and give us the option to view only those. There is no reason why a news app can’t collate the top ten stories from our local and international news sources and display just that content.

A finite list of news. An Instagram feed that can be finished. A Twitter feed that ends. Maybe when apps like this are available, at 11PM on a Wednesday night we will be able to feel a sense of completion. There will be no loading animation when we reach the bottom of the feed. We will read the stories we need to read. Know what we need to know. We will finally be satisfied. Or, after all, maybe not.

Follow @thisiselevator on Twitter for more. We’re just getting started.

Note: The cover image is a reference to the 1984 movie The NeverEnding Story.



1000x Faster Spelling Correction: Source Code released | FAROO Blog


URL:http://blog.faroo.com/2012/06/24/1000x-faster-spelling-correction-source-code-released/


In a followup to our recent post, 1000x Faster Spelling Correction algorithm, we are today releasing a C# implementation of our Symmetric Delete spelling correction algorithm as open source:

// SymSpell: 1000x faster through Symmetric Delete spelling correction algorithm
//
// The Symmetric Delete spelling correction algorithm reduces the complexity of edit candidate generation and dictionary lookup 
// for a given Damerau-Levenshtein distance. It is three orders of magnitude faster and language independent.
// Unlike other algorithms, only deletes are required; no transposes + replaces + inserts.
// Transposes + replaces + inserts of the input term are transformed into deletes of the dictionary term.
// Replaces and inserts are expensive and language dependent: e.g. Chinese has 70,000 Unicode Han characters!
//
// Copyright (C) 2012 Wolf Garbe, FAROO Limited
// Version: 1.6
// Author: Wolf Garbe <wolf.garbe@faroo.com>
// Maintainer: Wolf Garbe <wolf.garbe@faroo.com>
// URL: http://blog.faroo.com/2012/06/07/improved-edit-distance-based-spelling-correction/
// Description: http://blog.faroo.com/2012/06/07/improved-edit-distance-based-spelling-correction/
//
// License:
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License, 
// version 3.0 (LGPL-3.0) as published by the Free Software Foundation.
// http://www.opensource.org/licenses/LGPL-3.0
//
// Usage: single word + Enter: Display spelling suggestions
// Enter without input: Terminate the program
using System;
using System.Linq;
using System.Text.RegularExpressions;
using System.Collections.Generic;
using System.IO;
using System.Diagnostics;
static class SymSpell
{
 private static int editDistanceMax=2;
 private static int verbose = 0;
 //0: top suggestion
 //1: all suggestions of smallest edit distance 
 //2: all suggestions <= editDistanceMax (slower, no early termination)
 private class dictionaryItem
 {
 public string term = "";
 public List<editItem> suggestions = new List<editItem>();
 public int count = 0;
 public override bool Equals(object obj)
 {
 return Equals(term, ((dictionaryItem)obj).term);
 }
 public override int GetHashCode()
 {
 return term.GetHashCode(); 
 } 
 }
 private class editItem
 {
 public string term = "";
 public int distance = 0;
 public override bool Equals(object obj)
 {
 return Equals(term, ((editItem)obj).term);
 }
 public override int GetHashCode()
 {
 return term.GetHashCode();
 } 
 }
 private class suggestItem
 {
 public string term = "";
 public int distance = 0;
 public int count = 0;
 public override bool Equals(object obj)
 {
 return Equals(term, ((suggestItem)obj).term);
 }
 public override int GetHashCode()
 {
 return term.GetHashCode();
 } 
 }
 private static Dictionary<string, dictionaryItem> dictionary = new Dictionary<string, dictionaryItem>();
 //create a non-unique wordlist from sample text
 //language independent (e.g. works with Chinese characters)
 private static IEnumerable<string> parseWords(string text)
 {
 return Regex.Matches(text.ToLower(), @"[\w-[\d_]]+")
 .Cast<Match>()
 .Select(m => m.Value);
 }
 //for every word, all deletes with an edit distance of 1..editDistanceMax are created and added to the dictionary
 //every delete entry has a suggestions list, which points to the original term(s) it was created from
 //The dictionary may be dynamically updated (word frequency and new words) at any time by calling createDictionaryEntry
 private static bool CreateDictionaryEntry(string key, string language)
 {
 bool result = false;
 dictionaryItem value;
 if (dictionary.TryGetValue(language+key, out value))
 {
 //already exists:
 //1. word appears several times
 //2. word1==deletes(word2) 
 value.count++;
 }
 else
 {
 value = new dictionaryItem();
 value.count++;
 dictionary.Add(language+key, value);
 }
 //edits/suggestions are created only once, no matter how often word occurs
 //edits/suggestions are created only as soon as the word occurs in the corpus, 
 //even if the same term existed before in the dictionary as an edit from another word
 if (string.IsNullOrEmpty(value.term))
 {
 result = true;
 value.term = key;
 //create deletes
 foreach (editItem delete in Edits(key, 0, true))
 {
 editItem suggestion = new editItem();
 suggestion.term = key;
 suggestion.distance = delete.distance;
 dictionaryItem value2;
 if (dictionary.TryGetValue(language+delete.term, out value2))
 {
 //already exists:
 //1. word1==deletes(word2) 
 //2. deletes(word1)==deletes(word2) 
 if (!value2.suggestions.Contains(suggestion)) AddLowestDistance(value2.suggestions, suggestion);
 }
 else
 {
 value2 = new dictionaryItem();
 value2.suggestions.Add(suggestion);
 dictionary.Add(language+delete.term, value2);
 }
 }
 }
 return result;
 }
 //create a frequency dictionary from a corpus
 private static void CreateDictionary(string corpus, string language)
 {
 if (!File.Exists(corpus))
 {
 Console.Error.WriteLine("File not found: " + corpus);
 return;
 }
 Console.Write("Creating dictionary ...");
 long wordCount = 0;
 foreach (string key in parseWords(File.ReadAllText(corpus)))
 {
 if (CreateDictionaryEntry(key, language)) wordCount++;
 }
 Console.WriteLine("\rDictionary created: " + wordCount.ToString("N0") + " words, " + dictionary.Count.ToString("N0") + " entries, for edit distance=" + editDistanceMax.ToString());
 }
 //save some time and space
 private static void AddLowestDistance(List<editItem> suggestions, editItem suggestion)
 {
 //remove all existing suggestions of higher distance, if verbose<2
 if ((verbose < 2) && (suggestions.Count > 0) && (suggestions[0].distance > suggestion.distance)) suggestions.Clear();
 //do not add suggestion of higher distance than existing, if verbose<2
 if ((verbose == 2) || (suggestions.Count == 0) || (suggestions[0].distance >= suggestion.distance)) suggestions.Add(suggestion);
 }
 //inexpensive and language independent: only deletes, no transposes + replaces + inserts
 //replaces and inserts are expensive and language dependent (Chinese has 70,000 Unicode Han characters)
 private static List<editItem> Edits(string word, int editDistance, bool recursion)
 {
 editDistance++;
 List<editItem> deletes = new List<editItem>();
 if (word.Length > 1)
 {
 for (int i = 0; i < word.Length; i++)
 {
 editItem delete = new editItem();
 delete.term=word.Remove(i, 1);
 delete.distance=editDistance;
 if (!deletes.Contains(delete))
 {
 deletes.Add(delete);
 //recursion, if maximum edit distance not yet reached
 if (recursion && (editDistance < editDistanceMax)) 
 {
 foreach (editItem edit1 in Edits(delete.term, editDistance,recursion))
 {
 if (!deletes.Contains(edit1)) deletes.Add(edit1); 
 }
 } 
 }
 }
 }
 return deletes;
 }
 private static int TrueDistance(editItem dictionaryOriginal, editItem inputDelete, string inputOriginal)
 {
 //We allow simultaneous edits (deletes) of editDistanceMax on both the dictionary and the input term.
 //For replaces and adjacent transposes the resulting edit distance stays <= editDistanceMax.
 //For inserts and deletes the resulting edit distance might exceed editDistanceMax.
 //To prevent suggestions of a higher edit distance, we need to calculate the resulting edit distance, if there are simultaneous edits on both sides.
 //Example: (bank==bnak and bank==bink, but bank!=kanb and bank!=xban and bank!=baxn for editDistanceMax=1)
 //Two deletes on each side of a pair makes them all equal, but the first two pairs have edit distance=1, the others edit distance=2.
 if (dictionaryOriginal.term == inputOriginal) return 0; else
 if (dictionaryOriginal.distance == 0) return inputDelete.distance;
 else if (inputDelete.distance == 0) return dictionaryOriginal.distance;
 else return DamerauLevenshteinDistance(dictionaryOriginal.term, inputOriginal);//adjust distance, if both distances>0
 }
 private static List<suggestItem> Lookup(string input, string language, int editDistanceMax)
 {
 List<editItem> candidates = new List<editItem>();
 //add original term
 editItem item = new editItem();
 item.term = input;
 item.distance = 0;
 candidates.Add(item);
 List<suggestItem> suggestions = new List<suggestItem>();
 dictionaryItem value;
 while (candidates.Count>0)
 {
 editItem candidate = candidates[0];
 candidates.RemoveAt(0);
 //save some time
 //early termination
 //suggestion distance=candidate.distance... candidate.distance+editDistanceMax 
 //if candidate distance is already higher than suggestion distance, then no better suggestions are to be expected
 if ((verbose < 2)&&(suggestions.Count > 0)&&(candidate.distance > suggestions[0].distance)) goto sort;
 if (candidate.distance > editDistanceMax) goto sort; 
 if (dictionary.TryGetValue(language+candidate.term, out value))
 {
 if (!string.IsNullOrEmpty(value.term))
 {
 //correct term
 suggestItem si = new suggestItem();
 si.term = value.term;
 si.count = value.count;
 si.distance = candidate.distance;
 if (!suggestions.Contains(si))
 {
 suggestions.Add(si);
 //early termination
 if ((verbose < 2) && (candidate.distance == 0)) goto sort; 
 }
 }
 //edit term (with suggestions to correct term)
 dictionaryItem value2;
 foreach (editItem suggestion in value.suggestions)
 {
 //save some time 
 //skipping double items early
 if (suggestions.Find(x => x.term == suggestion.term) == null)
 {
 int distance = TrueDistance(suggestion, candidate, input);
 //save some time.
 //remove all existing suggestions of higher distance, if verbose<2
 if ((verbose < 2) && (suggestions.Count > 0) && (suggestions[0].distance > distance)) suggestions.Clear();
 //do not process higher distances than those already found, if verbose<2
 if ((verbose < 2) && (suggestions.Count > 0) && (distance > suggestions[0].distance)) continue;
 if (distance <= editDistanceMax)
 {
 if (dictionary.TryGetValue(language+suggestion.term, out value2))
 {
 suggestItem si = new suggestItem();
 si.term = value2.term;
 si.count = value2.count;
 si.distance = distance;
 suggestions.Add(si);
 }
 }
 }
 }
 }//end foreach
 //add edits 
 if (candidate.distance < editDistanceMax)
 {
 foreach (editItem delete in Edits(candidate.term, candidate.distance,false))
 {
 if (!candidates.Contains(delete)) candidates.Add(delete);
 }
 }
 }//end while
 sort: suggestions = suggestions.OrderBy(c => c.distance).ThenByDescending(c => c.count).ToList();
 if ((verbose == 0)&&(suggestions.Count>1)) return suggestions.GetRange(0, 1); else return suggestions;
 }
 private static void Correct(string input, string language)
 {
 List<suggestItem> suggestions = null;
 /*
 //Benchmark: 1000 x Lookup
 Stopwatch stopWatch = new Stopwatch();
 stopWatch.Start();
 for (int i = 0; i < 1000; i++)
 {
 suggestions = Lookup(input,language,editDistanceMax);
 }
 stopWatch.Stop();
 Console.WriteLine(stopWatch.ElapsedMilliseconds.ToString());
 */
 //check in dictionary for existence and frequency; sort by edit distance, then by word frequency
 suggestions = Lookup(input, language, editDistanceMax);
 //display term and frequency
 foreach (var suggestion in suggestions)
 {
 Console.WriteLine( suggestion.term + " " + suggestion.distance.ToString() + " " + suggestion.count.ToString());
 }
 if (verbose == 2) Console.WriteLine(suggestions.Count.ToString() + " suggestions");
 }
 private static void ReadFromStdIn()
 {
 string word;
 while (!string.IsNullOrEmpty(word = (Console.ReadLine() ?? "").Trim()))
 {
 Correct(word,"en");
 }
 }
 public static void Main(string[] args)
 {
 //e.g. http://norvig.com/big.txt , or any other large text corpus
 CreateDictionary("big.txt","en");
 ReadFromStdIn();
 }
 // Damerau–Levenshtein distance algorithm and code 
 // from http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance
 public static Int32 DamerauLevenshteinDistance(String source, String target)
 {
 Int32 m = source.Length;
 Int32 n = target.Length;
 Int32[,] H = new Int32[m + 2, n + 2];
 Int32 INF = m + n;
 H[0, 0] = INF;
 for (Int32 i = 0; i <= m; i++) { H[i + 1, 1] = i; H[i + 1, 0] = INF; }
 for (Int32 j = 0; j <= n; j++) { H[1, j + 1] = j; H[0, j + 1] = INF; }
 SortedDictionary<Char, Int32> sd = new SortedDictionary<Char, Int32>();
 foreach (Char Letter in (source + target))
 {
 if (!sd.ContainsKey(Letter))
 sd.Add(Letter, 0);
 }
 for (Int32 i = 1; i <= m; i++)
 {
 Int32 DB = 0;
 for (Int32 j = 1; j <= n; j++)
 {
 Int32 i1 = sd[target[j - 1]];
 Int32 j1 = DB;
 if (source[i - 1] == target[j - 1])
 {
 H[i + 1, j + 1] = H[i, j];
 DB = j;
 }
 else
 {
 H[i + 1, j + 1] = Math.Min(H[i, j], Math.Min(H[i + 1, j], H[i, j + 1])) + 1;
 }
 H[i + 1, j + 1] = Math.Min(H[i + 1, j + 1], H[i1, j1] + (i - i1 - 1) + 1 + (j - j1 - 1));
 }
 sd[ source[ i - 1 ]] = i;
 }
 return H[m + 1, n + 1];
 }
}
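
For reference, here is roughly what an interactive session looks like when the program is built and run against Norvig's big.txt, e.g. with Mono. Word, entry and frequency counts depend on the corpus, so they are elided below; the output columns are term, edit distance and word frequency:

$ mcs SymSpell.cs        # or csc on Windows; the Mono compiler name varies by version
$ mono SymSpell.exe
Creating dictionary ...
Dictionary created: ... words, ... entries, for edit distance=2
acomodation
accommodation 2 ...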

Updated:
The implementation now supports edit distances of any size (default=2).

Benchmark:
With previous spell-checking algorithms, the required time explodes with larger edit distances. They try to mitigate this with early termination when suggestions of smaller edit distances are found.

We did a quick benchmark with 1000 lookups:

Term         Best correction   Edit distance   Faroo ms/1000   Peter Norvig ms/1000   Factor
marsupilami  no correction*    >3              1,772           165,025,000            93,129
acamodation  accommodation     3               1,874           175,622,000            93,715
acomodation  accommodation     2               162             348,191                2,149
hous         house             1               71              179                    2
house        house             0               0               17                     n/a
*Correct word, but not in the dictionary, and there are also no corrections within an edit distance of 3.

The speed advantage grows exponentially with the edit distance:
For an edit distance=1 it’s the same order of magnitude,
for an edit distance=2 it’s 3 orders of magnitude faster,
for an edit distance=3 it’s 5 orders of magnitude faster.



DNSCrypt


URL:http://www.opendns.com/technology/dnscrypt


Background: The need for better DNS security

DNS is one of the fundamental building blocks of the Internet.  It's used any time you visit a website, send an email, have an IM conversation or do anything else online.  While OpenDNS has provided world-class security using DNS for years, and is the most secure DNS service available, the underlying DNS protocol has not been secure enough for our comfort.  Many will remember the Kaminsky Vulnerability, which impacted nearly every DNS implementation in the world (though not OpenDNS).

That said, the class of problems that the Kaminsky Vulnerability related to were a result of some of the underlying foundations of the DNS protocol that are inherently weak  -- particularly in the "last mile."  The "last mile" is the portion of your Internet connection between your computer and your ISP.  DNSCrypt is our way of securing the "last mile" of DNS traffic and resolving (no pun intended) an entire class of serious security concerns with the DNS protocol. As the world’s Internet connectivity becomes increasingly mobile and more and more people are connecting to several different WiFi networks in a single day, the need for a solution is mounting.

There have been numerous examples of tampering, or man-in-the-middle attacks, and snooping of DNS traffic at the last mile and it represents a serious security risk that we've always wanted to fix. Today we can.

Why DNSCrypt is so significant

In the same way that SSL turns HTTP web traffic into HTTPS encrypted Web traffic, DNSCrypt turns regular DNS traffic into encrypted DNS traffic that is secure from eavesdropping and man-in-the-middle attacks.  It doesn't require any changes to domain names or how they work; it simply provides a method for securely encrypting communication between our customers and our DNS servers in our data centers.  We know that claims alone don't work in the security world, however, so we've opened up the source to our DNSCrypt code base and it's available on GitHub.

DNSCrypt has the potential to be the most impactful advancement in Internet security since SSL, significantly improving every single Internet user's online security and privacy.

Note: Looking for malware, botnet and phishing protection for laptops or iOS devices? Check out Umbrella Mobility by OpenDNS.  

Frequently Asked Questions (FAQ):

1. In plain English, what is DNSCrypt?

DNSCrypt is a piece of lightweight software that everyone should use to boost online privacy and security.  It works by encrypting all DNS traffic between the user and OpenDNS, preventing any spying, spoofing or man-in-the-middle attacks.

2. How can I use DNSCrypt today?

DNSCrypt is immediately available as a technology preview.  It should work and shouldn't cause problems, but we're still making iterative changes regularly.  You can download a version for Mac or Windows from the links above.

Tips:
If you have a firewall or other middleware mangling your packets, you should try enabling DNSCrypt with TCP over port 443.  This will make most firewalls think it's HTTPS traffic and leave it alone.

If you prefer reliability over security, enable fallback to insecure DNS.  If you can't reach us, we'll try using your DHCP-assigned or previously configured DNS servers.  This is a security risk though.
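
For those building from the GitHub source rather than using the preview builds, a rough sketch of the first tip above using the dnscrypt-proxy command-line tool follows. Flag names vary between versions, so treat this as illustrative and verify against dnscrypt-proxy --help; 208.67.220.220 is one of the OpenDNS resolver addresses:

# Listen on localhost and force DNSCrypt over TCP port 443 so that
# middleboxes treat the traffic like HTTPS (flags are illustrative):
dnscrypt-proxy --local-address=127.0.0.1 \
               --resolver-address=208.67.220.220:443 \
               --tcp-only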

3. What about DNSSEC? Does this eliminate the need for DNSSEC?

No. DNSCrypt and DNSSEC are complementary.  DNSSEC does a number of things.  First, it provides authentication. (Is the DNS record I'm getting a response for coming from the owner of the domain name I'm asking about or has it been tampered with?)  Second, DNSSEC provides a chain of trust to help establish confidence that the answers you're getting are verifiable.  But unfortunately, DNSSEC doesn't actually provide encryption for DNS records, even those signed by DNSSEC.  Even if everyone in the world used DNSSEC, the need to encrypt all DNS traffic would not go away. Moreover, DNSSEC today represents a near-zero percentage of overall domain names, and an ever-smaller percentage of DNS records each day as the Internet grows.

That said, DNSSEC and DNSCrypt can work perfectly together.  They aren't conflicting in any way.  Think of DNSCrypt as a wrapper around all DNS traffic and DNSSEC as a way of signing and providing validation for a subset of those records.  There are benefits to DNSSEC that DNSCrypt isn't trying to address. In fact, we hope DNSSEC adoption grows so that people can have more confidence in the entire DNS infrastructure, not just the link between our customers and OpenDNS.

4. Is this using SSL? What's the crypto and what's the design?

We are not using SSL.  While we make the analogy that DNSCrypt is like SSL in that it wraps all DNS traffic with encryption the same way SSL wraps all HTTP traffic, it's not the crypto library being used.  We're using elliptic-curve cryptography, in particular the Curve25519 elliptic curve.  The design goals are similar to those described in the DNSCurve forwarder design.

Self Mallard (4.5.0) released | Self


URL:http://blog.selflanguage.org/2014/01/12/self-mallard-4-5-0-released/


Named after the majestic, awe-inspiring Anas platyrhynchos, the latest Self release 4.5.0 is now available for download.

What’s new since 4.4?

  • Build system redone by Tobias Pape. Now based on cmake, with a single modern build process for both Linux and OS X. The VM can be built with both GCC and Clang on the latest versions of both operating systems.
  • New Self Control.app on OS X to manage your running worlds, a more robust and fully featured replacement for the older ‘Self Droplet’. Use of this app is optional and you can still access the Self VM through the command line.
  • New look for standard world, with better fonts, colours and greater use of space.
  • Various fixes to the standard world, including a new build script ‘worldBuilder.self’ replacing several ad hoc build scripts.
  • Updated Self Handbook at docs.selflanguage.org

Sources for the VM and for the basic Self world are available as always from the GitHub repository. Although Self, like Smalltalk-80, can use an image (in Self called a snapshot of a world), this can be built from text sources.

Self Mallard is available for OS X and Linux. You can download binaries:

  • For OS X, a disk image containing the Self Control.app and a prebuilt clean snapshot (“Clean.snap”). Copy the app to your Applications folder, run it and click the Choose snapshot… button.
  • For Linux, a gzipped tar file containing a prebuilt 32-bit binary and Clean.snap. Unpack, then run ./Self -s Clean.snap


Court Rules That Yelp Must Unmask the Identities of Seven Anonymous Reviewers - Rebecca J. Rosen - The Atlantic


URL:http://www.theatlantic.com/technology/archive/2014/01/court-rules-that-yelp-must-unmask-the-identities-of-seven-anonymous-reviewers/282959/


Just how strong are the legal protections for those posting critical reviews under pseudonyms on the Internet?

Screenshot of the Hadeed Carpet Yelp review page (Yelp.com)

Over the past few years, seven people have been so dissatisfied with the service they received from Hadeed Carpet Cleaning of Alexandria, Virginia, that they took to Yelp to air the details of their dissatisfaction. They, like so many unhappy customers since Yelp launched in 2004, did so under pseudonyms.

The right to complain—anonymously or not—is one that Americans enjoy (and they do seem to enjoy it). But such complaints, in order to receive legal protection, must be factually true. "There is no constitutional value in false statements of fact," the Court has held.

Hadeed Carpet Cleaning believes that those seven unhappy reviewers lied in their Yelp reviews. It's not that the little details of the reviews were wrong, but that they were made up altogether. The seven reviewers were never customers at all, Hadeed Carpet Cleaning claims. If that is indeed the case, then the reviews are false. And if they've additionally caused harm, then the reviews are defamatory. (Though the decision does not explicitly say so, the implication is that false reviews on Yelp may be the pseudonymous, strategic communications of a competing firm.) 

But in order to press that claim, Hadeed Carpet Cleaning would need to know who made those reviews. And so to find out, they subpoenaed Yelp to turn over the identities. Yelp refused, and the case headed to court. Earlier this week, the Court of Appeals of Virginia ruled that Yelp must out its seven anonymous reviewers.

Anonymous speech is a central component of America's First Amendment legacy. The Supreme Court has repeatedly protected the right to speak anonymously, holding in 1960 that, "Anonymous pamphlets, leaflets, brochures and even books have played an important role in the progress of mankind." In 1995 they affirmed that earlier position: "Under our Constitution, anonymous pamphleteering is not a pernicious, fraudulent practice, but an honorable tradition of advocacy and of dissent. Anonymity is a shield from the tyranny of the majority." This right, the Virginia Court of Appeals noted, does not disappear at a website's log-in screen. "The right to free speech is assiduously guarded in all mediums of expression, from the analog to the digital," it held.

That position notwithstanding, the court continued, the right to speak anonymously is not an absolute one: "Defamatory speech is not entitled to constitutional protection."

The court turned to Virginia's state law, which requires, among other things, that the plaintiff need show that the reviews "are or may be tortious or illegal," or that Hadeed Carpet Cleaning has "a legitimate, good faith basis" to believe that they were the victim of actionable conduct. The court held that the lower court's assessment was correct: "If the Doe defendants were not customers of Hadeed, then their Yelp reviews are defamatory." Moreover, the court believed that Hadeed had conducted a sufficient review of its own corporate records to have "a legitimate, good faith basis" for believing the reviewers had invented their claims.

Paul Levy, an attorney at the advocacy group Public Citizen who argued the case for Yelp, says that he believes this is where the court's ruling falters. The standard for a claim of defamation is just too soft, not requiring any showing of falsity or damages.

"They don't say that the substance is false," he told me. "They say, well, we can't be sure this person is a customer. No one with this pseudonym from this city is in our customer database. Well, of course! It's a pseudonym. They haven't shown anything that really would lead any person to believe that this isn't a customer."

Levy and Public Citizen believe such a showing of evidence should be required. "If you've been defamed, you ought to be able to [show evidence of your claim of defamation]," Levy says. "And that's both what Hadeed didn't do here—they just refused—they didn't do that here and the court didn't require them to do that." Many other state courts, such as those in Delaware, Maryland, D.C. (not a state per se, but, alas), Arizona, California, Texas, New Hampshire, and Indiana, have all required supporting evidence in order to unmask the identities of anonymous communicators online.

But the Virginia court wasn't looking at those other states' codes. It was looking at Virginia state law, whether that state law was constitutional or not. "We are 'reluctant to declare legislative acts unconstitutional, and will do so only when the infirmity is clear, palpable, and practically free from doubt'," the court wrote, quoting from an earlier decision. The court continued, saying this is because "any reasonable doubt as to the constitutionality of a statute must be resolved in favor of its constitutionality," again quoting from an earlier decision. With those principles in mind, the court "decline[d] to declare [the unmasking statute] unconstitutional."

Instead, that job will next fall to the Virginia Supreme Court, as Yelp has already decided to appeal the decision, Levy told me.

The case illuminates the growing role of courts in mediating anonymous speech. In an earlier time, when anonymous speech happened via paper and ink, there may have been no one for a victim of defamation to subpoena. If you print your diatribes yourself, and plaster them about town, without at least very good detective work, there's no one who could possibly reveal your identity.

That's no longer the case. Online, with the exception of highly sophisticated Internet users who take deliberate steps to cloak their identities using encryption, there are many third parties who know the identities behind anonymous communicators. There are sites such as Yelp, which might have your real email address; there's your Internet service provider. If you wrote your diatribe on your own, nameless website, there would still be a domain name provider and a hosting service. Online, anonymity is spurious; someone out there almost always knows who you are. Each one of these links in the chain could be subpoenaed, and if and when they refuse to comply, courts will have to decide whether the claim is meritorious, or whether the anonymous actor deserves protection.

That's why the standards that courts choose to employ in making those decisions are so important. Will they, like Virginia did, defer to a claim without a showing of falsity or harm? Or will they require more? How will they balance the very real concerns of those whose good name is at stake with those who may be vulnerable to retaliation, say an employee who speaks ill of an employer, or a citizen who criticizes a police chief?

But that's not necessarily a bad thing. If courts can craft a fair standard, they'll be able to balance the competing interests of free speech and protection from defamation on a case-by-case basis. If the court finds that you should be found, you'll be found.

Levy, for his part, thinks that's right. "That's the way it should be," he told me. "Because you shouldn't, and Public Citizen doesn't believe that you should be able to go accuse somebody of wrongful conduct, make a very factual statement, which you know to be false, and get away with it." Courts just need to make sure that is, indeed, the case.


Landon Fuller


URL:http://landonf.bikemonkey.org/code/ios/Radar_15800975_iOS_Frameworks.20140112.html


Introduction

When I first documented static frameworks as a partial workaround for the lack of shared libraries and frameworks on iOS, it was 2008.

Nearly six years later, we still don't have a solution on par with Mac OS X frameworks for distributing libraries, and in my experience, this has introduced unnecessary cost and complexity across the entire ecosystem of iOS development. I decided to sit down and write my concerns up as a bug report (rdar://15800975), realized that I'd nearly written an essay (and that I was overflowing Apple's 3000-character limit on their broken Radar web UI), and decided that I may as well turn it into an actual blog post.

The lack of a clean library distribution format has had a significant, but not always obvious, effect on the iOS development community and norms. I can't help but wonder whether the responsible parties at Apple -- where internal developers aren't subject to the majority of constraints we are -- realize just how much the lack of a clean library distribution mechanism has impacted not just how we share libraries with each other, but also how we write them.

It's been nearly 7 years since the introduction of iPhoneOS. iOS needs real frameworks, and moreover, iOS needs multiple-platform frameworks, with support for bundling Simulator, Device, and Mac binaries -- along with their resources, headers, and related content -- into a single atomic distribution bundle that applications developers can drag and drop into their projects.

The Problems with Static Libraries

From the perspective of someone who has spent nearly 13 years on Mac OS X and iOS (and various UNIXs and Mac OS before that), there is a litany of obvious costs and inefficiencies caused entirely by the lack of support for proper library distribution on iOS.

The limitations stand out in stark relief when compared to Mac OS X's existing framework support, or even other language environments such as Ruby, Python, Java, or Haskell, all of which, when compared to the iOS toolchain, provide more consistent, comprehensive mechanisms for building, distributing, and declaring dependencies on common libraries.

Targeting Simulator and Device

When targeting iOS, anyone distributing binary static libraries has had to adopt complicated workarounds to facilitate both adoption and usage by developers. If you look at all the common static libraries for iOS -- PLCrashReporter included -- they've been manually lipo'd from iOS/ARM and Simulator/x86 binaries to create a single static library to simplify linking. Xcode doesn't support this, requiring complex use of multiple targets (often duplicated for each platform), custom build scripts, and more complex development processes that increase the cognitive load for any other developers that might want to build the project.

On top of this, such binaries are technically invalid; Mach-O Universal binaries only encode architecture, not platform, and were there ever to be an ARM-based Mac, or an x86-based iOS device, these libraries would fail to link, as they conflate architecture (arm/x86) with platform (ios/simulator). Despite all that, we hack up our projects and ship these lipo'd binaries anyway, as the alternative is increasing the integration complexity for every single user of our library.

To make this work, library authors have employed a variety of complex work-arounds, such as using duplicated targets for both iOS and Simulator libraries to allow a single Xcode build to produce a lipo'd binary for both targets, driving xcodebuild via external shell scripts and stitching together the results, and employing a variety of 3rd-party "static framework" target templates that attempt to perform the above.
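
Concretely, the manual stitching usually boils down to something like the following sketch (the target and library names here are placeholders):

# Build the library once per platform SDK, then glue the results
# together into a single "universal" static library:
xcodebuild -target MyLib -sdk iphoneos -configuration Release
xcodebuild -target MyLib -sdk iphonesimulator -configuration Release
lipo -create build/Release-iphoneos/libMyLib.a \
             build/Release-iphonesimulator/libMyLib.a \
     -output libMyLib.a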

By comparison, Apple has the benefit of both being able to ship independent SDKs for each platform, and having support for finding and automatically using the appropriate SDK built into Xcode. As such, they're free to ship multiple binaries for each supported platform, and any user can simply pass a linker flag, or add the Apple-supplied libraries or frameworks to the appropriate build phase, and expect them to work.

Library Resources

One of the significant features of frameworks on Mac OS X is the ability to bundle resources. This doesn't just include images, nibs, and other visual data, but also bundled helper tools, XPCServices[1], and additional libraries/frameworks that the framework itself depends on.

On iOS, we can rule out XPC services and helper tools; we're not allowed to spawn subprocesses or bundled XPC services, which, while it arguably made more sense in the era of the 128MB iPhone 1 than it does now, is a subject for another blog post.

However, that leaves the other resource types -- compiled xibs, images, audio files, textures, etc. -- the distribution and use of which winds up being far more difficult than it needs to be. On Mac OS X, we can use great APIs like +[NSBundle bundleForClass:] to automatically find the bundle in which our framework class is defined, and use that bundle to load associated resources. Mac OS X users of our frameworks only have to drop our framework into their project, and all the resources will be available and in an easily found location.

On iOS, however, anyone shipping resources to be bundled with their library has had to adopt work-arounds. External resources are often provided as another, independent bundle that must be placed in their application bundle by end-users, increasing the number of steps required to integrate a simple library. Everyone has to write their own resource location code -- it's just a few lines of code to replace the functionality of +[NSBundle bundleForClass:] as an NSBundle category, but they're a few lines of code that shouldn't need to be written, and certainly not by every author of a library containing resources.
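
The boilerplate in question is typically a category along these lines -- a sketch only, with hypothetical category and bundle names, assuming the library ships its resources as a separate MyLibResources.bundle that users copy into their app:

// Sketch of the resource-location category that every iOS library
// author ends up writing by hand:
@implementation NSBundle (MyLibResources)
+ (NSBundle *)mylib_resourceBundle
{
    // The static library is linked into the app itself, so its
    // resources live in a bundle inside the main app bundle.
    NSString *path = [[NSBundle mainBundle] pathForResource:@"MyLibResources"
                                                     ofType:@"bundle"];
    return (path != nil) ? [NSBundle bundleWithPath:path] : nil;
}
@end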

This increase in effort -- both for your users and for you as an author -- leads to questioning whether you really need to ship a resource with your library, even if it would be the best technical solution. It changes the value equation, and as such, library authors deem the effort too complex for a simple use case, and instead do more work to either avoid including the resource, programmatically generate it, or in extreme cases, figure out how to bundle the resource into the static library itself, e.g., by inclusion as a byte array in a source file.

Meanwhile, when targeting Mac OS X, we've long-since added the resource to the framework bundle and moved on.

Debugging Info and Consistency

One of the great features of dynamic libraries is consistency. Even if two different applications both ship a duplicate copy of the same version of a shared library, the actual library itself will be the same.

This brings with it a number of advantages when it comes to supporting a library in the wild -- we can trust that, should an issue arise in the library, we as library authors know exactly what tools were used to build it, we have the original debugging symbols available (and ideally, we supplied them with the binary). This gives us a level of assurance that allows us to provide better support with less effort when and if things go wrong.

However, when using static libraries, that level of consistency and information sharing is lost. In years past, for example, I saw issues related to a specific linker bug that resulted in improper relocation of Mach-O symbols during final linking of the executable, causing crashes that occurred only in the specific user's application and could only be reproduced with a specific set of linker input.

If they attempted to reproduce the linker bug with an isolated test case, it would disappear, as the bug itself was dependent on the original linker input. The only way that I could provide assistance was if they sent me their project, including source code, so that I could debug the linker interactions locally. For obvious reasons, most developers can not send their company's source code to an external developer, and the issue generally disappeared forever if they changed the linker input -- eg, by adding or removing a file.

I was never able to get a reproduction case, and I was never able to reproduce the issue locally. For a few years, I'd receive sporadic bug reports about the linker issue appearing, and then disappearing, until finally some update to the Xcode toolchain seemed to have solved the issue, and -- through no change that I made -- the issue disappeared.

Consistency facilitates reliability.

However, there are advantages beyond reliability to having consistent versions of your framework used across all applications. One of the other advantages is transparency, and specifically, transparency when investigating failure.

When a static library is linked into a binary, all the symbols are relocated, linked, and new debugging information is generated -- assuming debugging information was available in the first place: the default 'release' target for static libraries strips out the DWARF data, and if you're shipping a commercial library, you may not want to expose the deep innards of your software by providing every user of your library with a full set of DWARF debugging info.

Given that, even if multiple applications use the same exact version of a library, each and every application build generates build-specific debug information, and in modern toolchains (eg, via LTO), may in fact generate code that constructively differs from the library as shipped. As a library author, you are entirely reliant on whatever debug information was preserved by the user, and in performing post-facto analysis of a crash, you cannot perform deep analysis of your library's machine code without also having access to the user's own application binary, along with the DWARF debugging information that contains not only the debug info for your library, but also that of the end-user's application.

That all assumes that you, as a library author, ship debugging symbols. If you're providing a commercial library for which debugging info must not be provided, then there is no reasonable way to perform post-facto debugging of your library after it has been statically linked into the final application.

By comparison, dynamic libraries and frameworks maintain consistency -- any DWARF dSYM that is preserved by the library author will apply equally to any integration of that version of the library. Commercial library vendors can provide debugging information as necessary post-crash, and as opposed to the symbol stripping that occurs as a link-time optimization when using static libraries, the public symbols of the dynamic library will be visible to the application developer, allowing them introspection into failures even in the case where no debugging info is supplied.

Missing Shared Library Features

Over the decades since shared libraries were first deployed, a variety of features were introduced that solved very real problems related to hiding implementation details, versioning, and otherwise presenting a clean interface to external consumers of the library.

Dependent Libraries

The most obvious example is linking of dependent libraries. When you add a framework to your project, that framework already has encoded the libraries it depends on; simply drop the framework in, and no further changes are required to the list of libraries your project itself links to.
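
You can see the recorded dependencies for yourself: otool -L prints the install name of every library a Mach-O binary was linked against (output abridged, framework name a placeholder):

$ otool -L MyLib.framework/MyLib
MyLib.framework/MyLib:
        /usr/lib/libsqlite3.dylib (compatibility version 9.0.0, ...)
        /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, ...)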

With static libraries, however, it's the application developer's responsibility to add all required libraries in addition to the one they actually want to use. Some companies have gone so far as producing custom integration applications that walk users through the process of configuring their Xcode project just to provide that same level of ease-of-use that you'd get for free from frameworks. Other library implementors have switched to dlopen()'ing their dependencies just to avoid having to deal with user support around configuring linked libraries -- given a linker error showing only the undefined symbols, it's rarely obvious to an unfamiliar application developer what library they should link to fix it.

Even if users should know how to do this successfully, it remains a totally unnecessary burden to place on every single consumer of a library, and forces library authors to reconsider adding new system framework dependencies to their project -- even if it would be the best technical choice -- as it will break the build of every project that upgrades to a new version with that dependency, requiring additional configuration on behalf of the application author, and additional support from the library developer.

Two-level Namespacing

However useful automatic dependent library linking may be, there are much more significant (and much less easily worked-around) features not provided by static libraries -- such as two-level namespacing.

Two-level namespacing is a somewhat unique Apple platform feature -- instead of the linker recording just an external reference to a symbol name, it instead records both the library from which the symbol was referenced (first level of the namespace) AND the symbol name (second level of the namespace).

This is a huge win for compatibility and for avoiding exposing internal implementation details that may break the application or other libraries. For example, if my framework internally depends on a linked copy of sqlite3 that includes custom build-time features (as PLDatabase actually does), and your application links against /usr/lib/libsqlite3.dylib, there is no symbol conflict. Your application will resolve SQLite symbols in /usr/lib/libsqlite3.dylib, and the library will resolve them in its local copy of libsqlite3.
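
You can observe two-level namespacing in any Mach-O binary: nm -m annotates each undefined symbol with the library the linker recorded for it (binary name is a placeholder, output abridged):

$ nm -m MyApp | grep sqlite3_open
        (undefined) external _sqlite3_open (from libsqlite3)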

If you're using static libraries, however, two-level namespacing can't work. Since the static library is included in the main executable, the static library and the main executable share the same references to the same external symbols.

Without two-level namespacing, internal library dependencies -- such as libc++/libstdc++ -- are exposed to the enclosing application, causing build breakage, incompatibility between multiple static libraries, incompatibilities with enclosing applications, and depending on the library in question, the introduction of subtle failures or bugs. This requires work-arounds on behalf of the library author -- in libraries such as PLCrashReporter, where a minimal amount of C++ is used internally, this has resulted in our careful avoidance of any features that would require linking the C++ stdlib. This is not an approach that would work for a project making use of more substantial C++ features, and the result is that library authors must either provide two versions of their library, one using libc++, one using libstdc++, or all clients of that library must switch C++ stdlibs in lockstep - even if they neither expose nor rely on C++ APIs externally.

Symbol Visibility

One of the features that is possible to achieve with static libraries is the management of symbol visibility. For example, PLCrashReporter ships with an embedded copy of the protobuf-c library. To avoid conflict with applications and libraries that also use protobuf-c, we rely on symbol visibility to hide the symbols entirely (though, if we had two-level namespaces, we could have avoided the problem in the first place).

To export only a limited set of symbols, we can use linker support for generating MH_OBJECT files from static libraries. This is called "single object pre-link" in Xcode, and uses ld(1)'s '-r' flag. Unfortunately, MH_OBJECT is not used by Xcode's static library templates by default, is seemingly rarely used inside of Apple, and has exhibited a wide variety of bugs. For example, a recent update to the Xcode toolchain introduced a bug in MH_OBJECT linking; when used in combination with -exported_symbols_list, the linker generates an invalid __LINKEDIT vmsize in the resulting Mach-O file (rdar://15042905 -- reported in September 2013).
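
Mechanically, when it works, the pre-link step looks roughly like this (file names are placeholders):

# Collapse the static library into a single relocatable MH_OBJECT
# file, exporting only the symbols listed in exported_symbols.txt:
ld -r -arch armv7 -all_load \
   -exported_symbols_list exported_symbols.txt \
   -o MyLib-prelinked.o libMyLib.a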

This highlights a major issue with static libraries: Apple doesn't use them the way we do. Things that aren't used largely aren't tested, with a high tendency towards bitrot and regression; unusual linker flags are no exception.

Conclusion

Above, I've listed a litany of issues that I've seen with actually producing and maintaining static libraries for iOS, and their deficiencies compared to a solution that NeXT was shipping a form of nearly 30 years ago. My list of issues is hardly exhaustive -- I didn't even mention ld's stripping of Objective-C categories, -all_load, and the work-arounds people have employed.

However, all technical issues aside, what's worse are the effects that these technical constraints have had on the culture of code sharing on the platform. The headaches involved in shipping binary libraries have contributed to most people not trying. Instead, we've adopted hacks and workarounds that create both technical debt and greatly constrain the power of tools we can bring to bear on problems. Since static libraries are so painful, people instead gravitate towards solutions that, while technically sub-optimal, are eminently pragmatic:

  • Drop-in source files that you're expected to include in your project, or
  • Xcode projects that one must include as a subproject, or
  • Shipping xcconfig files that the users must manually include to set the right configuration for a binary and/or subproject library

These workarounds have introduced a number of problems for the development community; the majority of my support requests have nothing to do with actually *using* my libraries. Rather, users get stuck trying to integrate them at all -- as a subproject, or trying to embed the source files, or all of the above.

Developers are regularly frustrated by projects that can't easily be integrated via source code drops, or have complicated sets of targets that are required when attempting to embed a subproject -- but if it were not for the limitations of iOS library distribution, the internals of our library build systems would not need to be exposed to developers at all.

For library authors, all of these integration hacks -- embedding source code, reproducing project configuration, hacking projects into use -- result in builds of the library being unique, not corresponding to a specific known release, which makes support and maintenance all the more difficult.

In response to these issues, a variety of cultural shifts have occurred. Tools like CocoaPods automate the extraction of source code from a library project, generate custom Xcode workspaces and project files, and reproduce the build configuration from the library project in the resulting workspace. Through this complex and often fragile mechanism, users can integrate library code in a way that begins to approach the simple atomic integration of a framework, but at the cost of ideally unnecessary user-facing additions to their project's build environment, significant fragility around the process, and not insignificant overhead for library authors themselves.

Outside of CocoaPods, the unnecessary complexity of distribution and packaging libraries for iOS has in no small part resulted in a decrease in the availability of easily integrated and well-designed frameworks. This is no surprise, as a significant, discouraging amount of effort is required to produce something on par with what was easily done with frameworks, and in doing so, one often has to introduce duplicated targets, complex scripts, and other work-arounds that are both time consuming to implement and maintain, and make the project inhospitable to potential contributors.

Apple’s lack of priority on solving the problem of 3rd-party iOS library distribution has taken a real toll on the community’s ability to produce documented, easily integrated, consistently versioned libraries with reasonable effort; most people seemingly don't even try outside of CocoaPods, whereas this was the norm on Mac OS X.

For those of us outside of Apple, limited to only what is permitted by Apple for 3rd-party developers, I believe that this stance has been damaging to the culture of software craftsmanship by introducing significant, discouraging costs and inefficiencies when trying to produce consistent, transparent, easily integrated, high-quality libraries.

Nearly 7 years after the introduction of iOS, it is well past time for Apple to prioritize closing the gap between the iOS and Mac toolchains. A real framework solution is not the only improvement we need to see to the development environment, and certainly not the most important one, but it plays a central role in how we as third-party developers can share and deliver common code.

[1] Technically, only system frameworks can currently embed XPCServices on Mac OS X. Mentioned in rdar://15800873.


How To Find Unadvertised Jobs | Glassdoor Blog


URL:http://www.glassdoor.com/blog/find-unadvertised-jobs/


Did you know that most jobs are not advertised? Believe me, it’s true. When you see a job posted on (insert leading job board name here) somebody paid to have it advertised there. You know where it’s free for a company to post all of their jobs? Come on, guess… If you suspected a company’s own careers website, you would be correct!

In terms of percentages, it has been estimated that online job board postings represent about 15-20% of the total jobs out there. Click here to do some research and see for yourself. Why such a small percentage? Hey, the cost of advertising those opportunities adds up! Such being the case, you might be wondering how you can have access to all of those gigs. Well, I have three options to present to you.

Option one for finding these unadvertised jobs is to go to the careers section of every company you have an interest in and do a manual search. This could prove to be quite laborious if you have an interest in several companies or you don’t care where you work so long as a check arrives on the 15th and 30th.

Option two for finding hidden opportunities is to spend some quality time on US.jobs. US.jobs is owned and facilitated by “DirectEmployers,” a nonprofit association of employers. What I like about this site is that when you search for a job, it searches the career sections of various companies rather than a list of promoted positions a la (insert leading job board name here). As such, you are able to search a richer database of jobs you most likely are not privy to. Now, is EVERY company’s career section available via the US.jobs system? No. However, a great deal of them are and many (if not all) are from Fortune 500 companies. Click here for a list.

Option three is a bit more technical and needs a bit of explaining but is so worth it. Do you know what an applicant tracking system is? Wikipedia defines it as “a software application that enables the electronic handling of recruitment needs.” As a jobseeker, you refer to it as the electronic black hole that eats up resumes. Specifically, it’s the system you interact with when you apply for a job on a company’s careers website. One of the more popular applicant tracking systems is produced by a company called “Taleo.”

With a little help from Google, you will be able to search company websites that are using the Taleo system. In this way, you will be able to find jobs that are not posted on (insert leading job board name here) and have an edge on your competition. Let me show you how.

In the Google search below, I am asking Google to look only on the Taleo.net website (where their system hosts various unadvertised jobs that are typically obtainable when a jobseeker does a search on a company’s careers website).  I do this when I search: “site:taleo.net” Afterward, I ask Google to find only those webpages that have “careers” in the title. This is what “intitle:careers” means. Finally, I add in the job title “programmer” because that is the job I am looking for.

Of course, just adding a job title is giving me too many broad results. I narrow it down by adding more keywords like “SAS” and “macro.”
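
Putting the three pieces together, the complete query looks like this:

site:taleo.net intitle:careers programmer SAS macro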

What I like about this is that I now get fewer results, but more in line with the type of job I am seeking. More importantly, I am accessing jobs that my job-seeking competitors may not be privy to yet. Excuse me a moment as I release an evil maniacal laugh of victory. Bwah-ha-ha-ha-ha-ha…

Okay, enough of that. What do you think of these ideas, especially the last option? I look forward to reading your comments.

ArunRocks - Real-time Applications and will Django adapt to it?

$
0
0

Comments:"ArunRocks - Real-time Applications and will Django adapt to it? "

URL:http://arunrocks.com/real-time-applications-and-will-django-adapt-to-it/


While talking about Django at PyCon India this year, the most common question I got was whether it supports Single Page applications or an event driven architecture. I was quick to point out that such realtime applications are usually handled by separate services at the backend (running on a different port?) and a JavaScript MVC framework on the front end. Sometimes, Django, supported by a REST library like Django REST framework or Piston, is used as the backend too.

But the more I thought about it, those questions felt more and more important. Then, I had a chance to read about Meteor. I tried some of their tutorials and they were fantastic. I was so impressed at that point that I felt this framework had the power to change web development in a big way. I was even tempted to write a Django counterpart to Why Meteor will kill Ruby on Rails. But then, better sense prevailed and I dug deeper.

So I wrote this post to really understand the real-time web application phenomenon and pen my thoughts around it. It looks at how a traditional web framework like Django might tackle it, or whether it really can. It is not a comprehensive analysis by any means, just some of my thoughts.

What is a Real-time Application?

Real-time web applications are what used to be known as AJAX applications or single-page applications. Except that now we are not talking about just submitting a form without refreshing. Upon an event, the server is supposed to push data into the browser, enabling two-way communication (thereby making the terms Server and Client moot). To the Facebook crowd, this means that the application can notify them about events like friends liking a post or starting a new chat conversation live - as and when it happens.

The application will give you the impression that it is “alive”, in the sense that traditional websites are pretty much stale after the page has loaded. To use an analogy, traditional websites are like cooked food that gets spoiled with time, but real-time websites are like living, breathing organisms - constantly interacting with and responding to the environment. As we will see, both have their own advantages and purposes.

The word Real-time itself comes from the field of Real-time Computing. There are various forms of real-time computing, based on how strictly the system treats response times. Realtime web applications are probably the least strict and could be termed ‘soft realtime’.

What kinds of sites need them?

Practically any kind of website can benefit from realtime updates. Off the top of my head, here are some examples:

  • E-commerce Site: Offers against an item update its discounted price in real time; user reviews
  • News Site: Minute-by-minute update of an emerging story, errata of an article, reader reactions, polls
  • Investment Site: Stock prices, exchange rates, any investment trigger set by user/customer advisor
  • Utilities Site: Updates your utility usage in real-time, notifies of any upcoming downtimes
  • Micro-blogging: Pretty much everything

Of course, there are several kinds of sites which would not be ideal for the real-time treatment: sites with relatively static content like blogs or wikis. Imagine if a blog site was designed to constantly poll for post updates and it suddenly went viral. This stretches the limits of the server’s scalability for largely unchanging content. The most optimal approach today would be to pre-compute and cache the article’s contents and serve the comments via Disqus to create an almost complete realtime experience. But, as we will soon see, this could change in the future as the fundamental nature of the web changes.

Websites are Turning into Multiplayer Games

Broadly, our examples have two kinds of real-time updates: from the server itself and from peers. The former involves notifications like changes in a discounted price or external events like a stock price change. But peer updates, from other users, are becoming extremely valuable as user-generated content becomes the basis for many modern sites, especially for the constant inflow of information which keeps such sites fresh and interesting. In addition, the social factor taps into inherent human tendencies to form groups and to know and interact with each other.

In many ways, this is exactly how a multiplayer game works. There are global events like weather changes or a sudden disaster, and there are player-generated events like a melee attack or a chat conversation. Games were some of the first programs I ever wrote; I am quite fond of them and know quite a bit about how they work. Real-time web applications are designed like multiplayer games, especially those built to work with a central server rather than, say, over a LAN. Multiplayer games have been in existence for about three decades now, so real-time web applications are able to leverage much of the work that has gone into creating them.

Technically, there are several similarities too. The event loop that every game programmer writes when starting out on a game is at the heart of most event-driven servers like Node.js or Tornado. Typically, the game client and server are written in two different languages: Eve Online, for example, uses Stackless Python on the server and possibly C++ with Python scripting on the client side. This is because, like web applications, the server side needs to interact with a database for bookkeeping purposes and is more IO-bound than CPU/GPU-bound. The needs are different, and games being extremely performance-hungry creations, developers often use the best language, tool or framework for each of the client and server sides. They often end up being different.
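
To make the connection concrete, here is a minimal sketch of an event-driven server in Tornado (assuming Tornado-era 3.x APIs; the handler and message are illustrative). A single event loop dispatches every request, much as a game loop dispatches every frame's events:

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("hello, realtime world")

application = tornado.web.Application([(r"/", MainHandler)])

if __name__ == "__main__":
    application.listen(8888)
    # The game-loop analogue: block here, dispatching events as they arrive.
    tornado.ioloop.IOLoop.instance().start()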

Of course, in the case of web applications, the de facto language was JavaScript. Over the years, several JavaScript APIs exposing the underlying system have emerged, which further cemented JavaScript’s position as the client-side language of choice. However, with several languages targeting JavaScript and browsers supporting source maps, other options like pyjs and ClojureScript have now emerged.

How does Meteor Solve it?

Meteor and Derby claim to be a new breed of web application frameworks built for the needs of the real-time web. These frameworks are written in JavaScript to eliminate the need to duplicate logic between the client and server. With Django or Rails, model declarations and behaviour had to be written in Python or Ruby on the server side and then typically rewritten in JavaScript for client-side MVC frameworks like AngularJS or Knockout. Depending on the amount of logic shared between the client and server, this can become a development and maintenance nightmare.

These new frameworks also allow automatic synchronisation of data. In other words, part or all of the server's data is replicated between the server and all connected clients. If you have ever programmed a multiplayer game, then you would realise how hard it is to maintain a consistent state across all the clients. By automatically synchronising the database information, you have an extremely powerful abstraction for rapidly creating real-time web applications.

A high-level understanding of how real-time web frameworks work

However, as Drew Houston (the creator of Dropbox) pointed out, treating anything over the network as locally accessible is a “leaky abstraction” due to network latencies. Programmers are very likely to under-engineer for the various challenges that networks can bring up, like a slower mobile connection or even a server crash. Users hate it when the real-time component stops working. If they see ‘Connection Lost’ on a simple chat window, the worst that can happen is that they lose their train of thought. But when an entire site becomes unresponsive, I believe they would start distrusting the site itself. Perhaps it is critical not to rely entirely on the data synchronisation mechanism for all kinds of functionality.

Regarding the advantages of using the same language to share logic between the client and server, the earlier discussion about multiplayer games comes to mind. Often the requirements of a web server are quite different from those of a client. Even if you avoid Callback Hell with Futures, JavaScript might not be everyone’s first choice for server-side programming. Until recently, it didn’t matter which language you used on the server as long as it returned the expected results, say HTML, XML or JSON. People can get very attached to their favourite language, and unsurprisingly so, considering the large amount of time one needs to spend mastering every nook and corner of a programming language. Expecting everyone to adopt JavaScript might not be realistic.

The payoff is, of course, that the shared data structures and logic will reduce the need to write them twice in two different languages. Unlike multiplayer games, this is a big deal in web programming due to the sheer amount of shared bookkeeping happening at both ends. However, is having JavaScript at both ends the only way out? We can think of at least one possible alternative approach. But before that, we need to look at whether we can continue using traditional frameworks.

Can Django Adapt?

The realtime web is a very real challenge that Django faces. There are some elegant solutions, like running Django with gevent. But these solutions look like hacks in the face of a comprehensive solution like Meteor.
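
For the curious, a minimal sketch of that gevent approach (the project module name here is hypothetical): monkey-patch the standard library so Django's blocking I/O yields to other greenlets, then serve the app with gevent's WSGI server.

from gevent import monkey
monkey.patch_all()  # must run before any blocking modules are imported

from gevent.pywsgi import WSGIServer
from myproject.wsgi import application  # hypothetical Django project module

# Each request is handled in a lightweight greenlet rather than a thread.
WSGIServer(("", 8000), application).serve_forever()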

The Django community is quite aware of this. At last year's DjangoCon US, the customary "Why I Hate Django" keynote was delivered by Geoff Schmidt, a principal developer of Meteor. He starts off on an ominous note, comparing the advent of real-time apps to a possible extinction event for Django, similar to the asteroid impact that drove the dinosaurs to extinction. Actually, Geoff is a big fan of Django, and he tried to adapt it to his needs but was not quite happy with the results.

And he is not alone. Guido, in his PyCon keynote Q&A, mentioned how it would be difficult for traditional web frameworks like Django to completely adapt to an event-driven model. He believes that newer frameworks might actually tackle that need better.

Before we answer the question whether Django can adapt, we need to find out what Django’s shortcomings are. Or rather, which parts of the framework are beginning to show its age in the new real-time paradigm.

Templates are no longer necessary

To be exact, HTML templates are being used less and less. “Just send me the meat” seems to be the mantra of real-time applications. Content is often rendered on the client side, so the wire essentially carries data in the form of XML or JSON. When Django was originally created, not all clients could support client-side rendering using JavaScript. But with increasingly powerful mobile clients, the situation is quickly changing.

However, this doesn’t mean that templating will no longer be required in frameworks. A case could be made for XML or JSON templates. But Python data structures can be mapped to JSON and back in a straightforward manner (just like JavaScript).
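
A quick illustration of that round trip, using nothing but the standard library (the payload is made up):

import json

post = {"title": "Realtime and Django", "tags": ["web", "async"], "likes": 42}
wire = json.dumps(post)           # Python dict -> JSON text for the wire
assert json.loads(wire) == post   # JSON text -> Python dict, lossless here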

However, the previously mentioned Django solutions like Piston and Django REST framework do not encourage using serialised Python data structures directly, for a good reason - security. Data coming from the outside world cannot be trusted. You will need to define classes with appropriate data types to perform basic type validation. Essentially, you might end up repeating the model class definitions you already wrote for the front end.

HTTP and the WSGI interface

If you read the above examples closely, you will notice that the real-time web works best for sites involving short bursts of data. For sites with long-form content like blogs, wikis or even news sites, it is probably best to stick with traditional web frameworks or even static content. Even if you have a very high bandwidth connection, it would simply be too chatty to check for updates to published information (unless you expect to have too many typos).

In fact, the web is specifically suited for the dissemination of long-form content. It works best as a request-reply mechanism for the retrieval of documents (or hypertext, if you like to be pedantic). It is stateless and hence suited for caching, wherever possible. This also explains why hacks like long-polling had to be created for the browser to support bidirectional communication until web sockets arrived.

This explains the design of WSGI, a synchronous protocol for handling a request and its response. A WSGI worker can handle only one request at a time, and it is hence not ideally suited for creating realtime applications. Since Django is designed to be deployed on a server with a WSGI interface, asynchronous code needs to bypass this mechanism.
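
The entire contract fits in a few lines. A minimal WSGI application (shown here with the standard library's reference server) receives one request and must return one response before the call completes, which leaves no natural place to push data to the client later:

def application(environ, start_response):
    body = b"Hello, web!"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]  # once this returns, the exchange is over

if __name__ == "__main__":
    from wsgiref.simple_server import make_server
    make_server("", 8000, application).serve_forever()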

This seems like a genuine shortcoming of Django. There might be hacks to work around it, like frequent polling. But it would be much better if the framework could integrate with the asynchronous model.

Today, writing a REST-based API using Django and interacting with it using a JavaScript MVC library seems to be a popular way of creating a single page application. To make it realtime, you might have to fiddle around with Gevent or Tornado and web sockets.
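
As a rough sketch of the Django half of that pattern (the view and payload here are illustrative, not from the original post), the server's job largely reduces to returning JSON for the JavaScript client to render:

import json
from django.http import HttpResponse

def message_list(request):
    # In a real app this would come from a model queryset.
    payload = {"messages": [{"author": "arun", "text": "hello"}]}
    return HttpResponse(json.dumps(payload), content_type="application/json")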

Can you have the cake and eat it too?

It is possible to look at the language impedance mismatch problem in a different way. Why not have your favourite language, even if it is not JavaScript, run on both the server and the client? Since JavaScript is increasingly used as a target language (I am refraining from calling it the ‘assembly language of the web’), why not convert the client part of the code written in, say, Python into JavaScript?

If we could have an intelligent Python compiler which could, say, target bytecode for the server part and the shared code, and then target JavaScript (perhaps asm.js too, for better performance) for the client part and the shared code, then it might just work. Other details would also have to be worked out, like event loops at both ends which can pass messages asynchronously, data synchronisation through a compact binary format, and data validation. It is a project much bigger than writing a library, but it might be a decent solution general enough to be applied to most languages.

Conclusion

Django is a good solution for the vast majority of web applications today. But expectations are rapidly changing towards real-time updates. Realtime web applications need a lot of design changes in existing web frameworks like Django. The existing solutions require a lot of components and sometimes repetitive code. Newer frameworks like Meteor and Derby seem well suited for these needs and for rapid development. But the design and scalability of real-time applications will still be tricky. Finally, if you are a non-JavaScript developer, there might still be hope.

Engineering Managers Should Code 30% of Their Time | Dr Dobb's

$
0
0

Comments:"Engineering Managers Should Code 30% of Their Time | Dr Dobb's"

URL:http://www.drdobbs.com/architecture-and-design/engineering-managers-should-code-30-of-t/240165174


No software engineering manager at a tech company should spend less than 30% of his or her time coding. Whether managing a team, a division, or all of engineering, when managers spend less than 30% of their time coding, they encounter a significant degradation in their ability to execute their responsibilities.

My claim stands in stark contrast to what I see as the expected path for software engineers who become team leaders. At each promotion, the engineer is expected to spend drastically less time coding, with a particularly steep drop-off when they go from a "lead" to a "manager" title. At that point, they're expected to maintain passing familiarity with their codebase. A director's coding is done, if at all, as a hobby.

This happened to me about a year ago as more of my time became absorbed in other things, such as recruiting, managing, and strategizing; I found then that I had crossed a threshold where my effectiveness as a technology leader suffered out of proportion to the amount of coding time lost. I wrote a short post on my blog that presented my thoughts following that experience, but without much concrete detail. Here, I'd like to expand that into a more developed thesis.

Why You Should Keep Coding

Many people believe that managers should step back and concentrate entirely on strategy and management. It makes sense that managers are expected to spend the majority of their time on these things. But as an industry, we pay too high a price for our managers when we allow or demand that they barely spend time coding. Once a person stops coding for a significant portion of time, critical connections to the concerns of developers atrophy. When that happens, decision-making, planning, and leadership suffer, undermining the entire basis for promoting engineers to management positions.

Estimates

One of the most important skills in an engineer's toolkit is estimation. Strategic planning is quite simply not possible without the ability to accurately estimate. And yet we engineers are, as a class, notoriously bad at it. So bad, in fact, that we are advised to just double whatever number we come up with when asked to estimate something. In general, it's easy to fool oneself into thinking that things will go optimally, but if we use the concept of "estimate traction," code appears to have a particularly slippery surface. Because there are so many ways to implement features, when you lose familiarity with the details, your estimates become even more unreliable.

Technical Debt

Another thing that engineering managers need first-hand exposure to is the impact of technical debt. These days, that term has been popularized enough that when you have to debate the priority of a new feature versus refactoring, you have a good chance of having that debate with people familiar with the concept. Engineering managers need to be more than familiar with the concept — they are the ones directly responsible for making the judgment call as to when technical debt needs to be paid down. A manager who codes regularly will have much better information on when and how to make that decision.

Continuity of Understanding

I haven't chosen the number 30% arbitrarily. I chose it based on my experience because it is a simple heuristic for enough time to keep up with the changes that happen in a codebase under active development. With less time, it's easy to lose the thread of development; and once that thread is dropped, I will need to ramp up all over again to retrieve it, thereby incurring an extra time penalty.

Parity with Responsibility

As a leader, you definitely should not be making all the decisions for your team, or approving all the decisions, but you need the context and the knowledge to facilitate all decisions. In the end, you are responsible for the outcome, and your ability to sensibly make choices should match that responsibility.

Your Team Respects You For Loving Code

Let's be clear: To be successful as a manager, you must facilitate your team members' efforts, foster their development, and ensure they are fulfilled by their work. I've been writing about how to diagnose and repair issues with poor managers on my blog in a series called Debugging the Boss. But to truly excel at managing software engineers, you had better love code. Because your team does, and they will respect you all the more if you lead by example.

Obstacles to Reaching 30%

Despite my best efforts, I have run into many obstacles trying to maintain my coding time at 30%. These include the following.

Actual Responsibilities: At a startup, there is always more work to do than there is time to do it, and even as a company scales and matures, being as effective as possible is always an exercise in managing competing priorities. An engineering manager has many responsibilities, which should take up 70% of their time. Here are a few:

  • Leadership and Team Tending: This responsibility is the first to appear in an engineering manager's career. You are no longer just responsible for your work; you are responsible for enabling your team to produce their best work. It takes time to mentor your team, resolve disputes, and think about how to optimize their environment for happiness and productivity.
  • Strategy: As the level of responsibility grows, an engineering manager is required to spend more time contributing to strategic planning at various levels. Initially, this will be limited to tech strategy, but later, both product development and competitive strategy will play a large part.
  • Recruiting: Managers, directors, VPs, and CTOs need to build their teams, sometimes quite rapidly. While a good recruiting staff is a help, there is no substitute for a strong leader who actively seeks out new connections and sells every good engineer they meet on how great it is to work on their team.
  • Customers: As engineering managers gain responsibility, they will often become more external-facing. This means they're brought into "pitch meetings" with high-value prospects and called on for firefighting when important customers are unhappy.
  • PR: Senior tech managers devote time to public speaking engagements, writing blog posts, and (of course) articles in prestigious tech journals. No matter how much help you have with these tasks, it takes time to write, edit, rehearse, travel, and present.

Avoidable Time Losses: The responsibilities I just discussed are what an engineering manager should be spending time on. These next areas are pitfalls I've experienced that have been my undoing when trying to maintain a bare minimum of 20% of my time spent coding, and which still stand between me and the 30% I am fighting to return to.

  • Not Saying No Enough: Achieving great things means working hard; however, growth has to be sustainable, and one of the most crucial responsibilities of an engineering manager is to say "no" when their team is over-committed or on the verge of becoming so. When you don't say no, other people begin to dictate your schedule and time commitments.
  • Meetings: An entire cottage industry exists to give advice on how to meet effectively, and justifiably so. I have wasted more time in meetings than any other single activity in my career. This is especially debilitating when you have fallen behind in hiring, and are attending meetings that really should be run by team leaders.

Failing Strategies

In my quest to regain my coding time, I have tried a number of things that have not worked.

  • Sleep Less: While surprisingly alluring for me, sacrificing sleep doesn't work. Your brain stops working and you become unpleasant to be around and much less effective.
  • Read Headers Only: I thought this was promising, but in practice, reading only the headers of C++ code commits gets you very little of the benefit you need for management.
  • Overspecialization: Knowledge of only one project in your overall codebase is appropriate for a team lead, but not a director or above — you need familiarity with everything you are responsible for.
  • Delegating Too Much Too Early: It's easy to make more work for yourself by delegating recklessly when your reports actually need careful mentoring.

Successful Strategies

In spite of numerous dead-ends, I have managed to uncover some successful strategies:

  • Time Blocking: The percentage of time on my calendar that is not allocated weeks in advance is minuscule. It seems obvious in hindsight, but I needed to allocate blocks of time specifically for coding. In practice, these blocks are frequently rebooked, but having even 8 hours blocked out per week makes an enormous difference.
  • Delegating: Delegating is tricky, especially when you have very strong opinions about how tasks should be done and the ability to do them if you had the time. There are many reasons why managers resist delegating, but every reason has to be viewed as a problem to be solved, rather than an insurmountable barrier. Nothing frees up your time for coding like handing off running a meeting to someone you trust.
  • Office Hours: Something I'm planning on instituting in the near future is office hours. This technique should help a lot with the random interruptions by consolidating them into discrete time windows, during which I can work on the many tasks I have that do not require committed, long-term focus.

Final Tips

Here are a few points of practical advice for managers finding themselves approaching, but not getting across, the 30% threshold:

  • Learn how to read code. It's a different skill than writing it.
  • Commit to a meeting structure and hold your organization to it. Do not attend a single meeting that does not have a defined agenda.
  • Get a real machine to work on. That MacBook Air you love for meeting hopping? Not it.
  • Know how to access a dev environment and test a change fast.
  • Understand if you're the kind of person who can use five 20-minute blocks. If you need an hour, then put it on your calendar.
  • Remember, 20–30% is a heuristic I've come up with for myself. Your mileage may vary. So measure yourself (Estimate how long it would take you to try out a hot fix; Can you list the most indebted areas of your codebase? Pick a random code review and see if you understand the conversation and the choices. If you don't, you need to dig in more).
  • Categorize your work by when you can work on it and what you need to accomplish it, rather than by topic. (Advocates of Getting Things Done (GTD) will recognize this as the essential basis of their productivity technique.)
  • Finally, I've lately become fond of getting paper homework. As backwards as it seems, printing out a spec, a list of features to prioritize, or even a block of code is often a nice balance to spending large amounts of time looking at a screen.

I hope these tips are useful. If you have other techniques that have worked for you, please leave them in the comments area.

Eliot Horowitz is the cofounder and CTO of MongoDB.

Net neutrality is half-dead: Court strikes down FCC’s anti-blocking rules | Ars Technica

$
0
0

Comments:"Net neutrality is half-dead: Court strikes down FCC’s anti-blocking rules | Ars Technica"

URL:http://arstechnica.com/tech-policy/2014/01/net-neutrality-is-half-dead-court-strikes-down-fccs-anti-blocking-rules/


The Federal Communications Commission's net neutrality rules were partially struck down today by the US Court of Appeals for the District of Columbia Circuit, which said the Commission did not properly justify its anti-discrimination and anti-blocking rules.

Those rules in the Open Internet Order, adopted in 2010, forbid ISPs from blocking services or charging content providers for access to the network. Verizon challenged the entire order and got a big victory in today's ruling. While it could still be appealed to the Supreme Court, the order today would allow pay-for-prioritization deals that could let Verizon or other ISPs charge companies like Netflix for a faster path to consumers.

The court left part of the Open Internet Order intact, however, saying that the FCC still has "general authority" to regulate how broadband providers treat traffic.

The FCC got itself into trouble with some wishy-washy rulemaking. The commission did not declare that ISPs are "common carriers," yet it imposed restrictions that sound strikingly similar to regulations that can only apply to common carriers.

The 81-page ruling (PDF) today states the following:

[T]he Commission has established that section 706 of the Telecommunications Act of 1996 vests it with affirmative authority to enact measures encouraging the deployment of broadband infrastructure. The Commission, we further hold, has reasonably interpreted section 706 to empower it to promulgate rules governing broadband providers’ treatment of Internet traffic, and its justification for the specific rules at issue here—that they will preserve and facilitate the “virtuous circle” of innovation that has driven the explosive growth of the Internet—is reasonable and supported by substantial evidence. That said, even though the Commission has general authority to regulate in this arena, it may not impose requirements that contravene express statutory mandates. Given that the Commission has chosen to classify broadband providers in a manner that exempts them from treatment as common carriers, the Communications Act expressly prohibits the Commission from nonetheless regulating them as such. Because the Commission has failed to establish that the anti-discrimination and anti-blocking rules do not impose per se common carrier obligations, we vacate those portions of the Open Internet Order.

FCC Chairman Tom Wheeler said the commission might appeal the ruling. “The DC Circuit has correctly held that ‘Section 706 . . . vests [the Commission] with affirmative authority to enact measures encouraging the deployment of broadband infrastructure’ and therefore may ‘promulgate rules governing broadband providers’ treatment of Internet traffic," Wheeler said in a written statement. "I am committed to maintaining our networks as engines for economic growth, test beds for innovative services and products, and channels for all forms of speech protected by the First Amendment. We will consider all available options, including those for appeal, to ensure that these networks on which the Internet depends continue to provide a free and open platform for innovation and expression, and operate in the interest of all Americans.”

Consumer advocacy group Free Press lamented the ruling. “We’re disappointed that the court came to this conclusion," Free Press CEO Craig Aaron said in a written statement. "Its ruling means that Internet users will be pitted against the biggest phone and cable companies—and in the absence of any oversight, these companies can now block and discriminate against their customers’ communications at will."

Aaron further blamed former FCC Chairman Julius Genachowski, who "made a grave mistake when [his Commission] failed to ground its open Internet rules on solid legal footing. Internet users will pay dearly for the previous chairman’s lack of political will."

Consumer advocacy group Public Knowledge offered similar thoughts, while urging the FCC to come up with new rules that can pass legal muster. "[T]he Court did uphold broad Commission authority to regulate broadband," Public Knowledge Senior VP Harold Feld said. "To exercise that authority, the FCC must craft open Internet protection that are not full fledged common carrier rules. Alternatively, if the FCC needs broader authority it can classify broadband as a title 2 common carrier service. Both of these are viable options. In fact, Public Knowledge has long held that both broadband is a telecommunications service, and that the modest protections offered by the Open Internet rules fall well short of full common carrier regulations."

Public Knowledge itself could appeal the ruling, Feld said.

We will provide further analysis of this ruling in a followup article today.

The case for an antibiotics tax

$
0
0

Comments:"The case for an antibiotics tax"

URL:http://www.washingtonpost.com/blogs/wonkblog/wp/2014/01/13/the-case-for-an-antibiotics-tax/?wprss=rss_ezra-klein&wpisrc=nl_wonk


The rise of antibiotic-resistant germs is becoming a big problem in the United States. That's why last month the Food and Drug Administration put out new rules to limit the widespread use of antibiotics in cows, pigs and chickens raised for food. (By some estimates, 80 percent of antibiotics in the United States are used on livestock.) That was followed by a crackdown on antibacterial soap.

Indian Prairie ranch manager Tom Gingerich works with a Texas longhorn raised without antibiotics. (Perry Backus/AP)

But might there be a better way to tackle this problem than blunt regulation? Perhaps. In a recent article in The New England Journal of Medicine, economists Aidan Hollis and Ziana Ahmed suggest that a simple tax or "user fee" on antibiotics used by the livestock industry would be a far more effective way to prevent overuse of these drugs.

It's an elegant argument. But one question is whether this logic should only extend to farms — or should there be a fee for all antibiotics? Let's take a closer look:

The problem with blunt regulations: Let's first recall what the FDA wants to do about antibiotics on farms. First, the agency is asking the makers of animal drugs to voluntarily alter their labels so that farmers can no longer buy antibiotics to promote animal growth (a fairly common practice). Second, licensed veterinarians will now need to sign off before antibiotics that are commonly used in human medicine can be used on farms.

Critics have outlined all sorts of potential problems with these regulations. Among other things, it's still tough to distinguish "valid" uses of antibiotics from "invalid" uses. How do regulators distinguish between a farmer feeding livestock daily low levels of antibiotics to prevent disease and a farmer feeding livestock daily low levels of antibiotics to promote growth? That's hard to enforce, and doing so requires constant rule tweaks and pricey monitoring.

The case for an antibiotics tax: Hollis and Ahmed argue that a simple user fee on antibiotics makes more sense. Their logic goes like this: Every time someone uses antibiotics, they increase the chance that the relevant bacteria will develop resistance to the drug. That's a cost to society that's not currently included in the market price of antibiotics. So one thing to do would be to impose a user fee on these drugs to account for this cost:

Every use of antibiotics increases selective pressure, thus undermining the value for other users. In effect, each antibiotic can have only a limited amount of use, so it is appropriate to charge a fee, just as logging companies pay “stumpage” fees and oil companies pay royalties. (A perfect fee would be calibrated to the extent of antibiotic resistance caused by each use; a practical fee, which is what we propose, would be based on the volume of antibiotics used.)

This approach, they write, would have a number of advantages. Setting a proper fee would ensure that antibiotics are used only when the benefits outweigh the cost to society. "Farms with good substitutes for antibiotics — for example, vaccinations or improved animal-management practices — would be discouraged from using antibiotics by higher prices, whereas farms with a high incidence of infections would probably continue to use antibiotics," the authors write.

Estimated annual antibiotic use in the United States (NEJM)

What's more, the user fee is fairly simple to administer, and the revenue could help fund crucial public research into new antibiotics (or strategies to limit resistance). That's a big deal, since pharmaceutical companies are increasingly reluctant to sink the necessary money (from $800 million to $1.7 billion per drug) into new antibiotics.

The drawbacks? Well, it wouldn't be easy to determine the appropriate price for the user fee. And any such tax would almost certainly raise food prices (although so would regulations: the National Research Council estimated that an outright ban on using antibiotics to promote growth would increase production costs $1.2 billion to $2.5 billion annually).

But, the authors note, these costs are likely to be smaller than the costs of increased antibiotic resistance: "According to our calculations above, a 1% reduction in the usefulness of existing antibiotics could impose costs of $600 billion to $3 trillion in lost human health."

But why limit it to farms? This also raises the question of whether it would make sense to put a fee on other uses of the drugs. After all, it's still not clear how much the use of antibiotics for livestock is actually contributing to the resistance problem. Maybe a lot. Maybe just a bit.

There's plenty of evidence that humans overuse these drugs for medical purposes, too. One recent study in JAMA Internal Medicine found that doctors prescribe antibiotics in 60 percent of all sore throat cases — even though only 10 percent of cases involve strep, the specific condition requiring antibiotics. (Or see Kiera Butler's article on how it's increasingly common for doctors to prescribe antibiotics after a quick phone consultation.)

Indeed, as Maryn McKenna* points out, in 2011 the Infectious Diseases Society of America proposed a user fee for antibiotics — but for all uses of the drugs, not just livestock. (Do note that the group also proposed a slew of other steps, too, from improved monitoring to educating doctors about conservation — the user fee wasn't, by itself, seen as sufficient.)

At the time, advocates were speaking about the issue in terms familiar to those who follow environmental issues: "We need to think of antibiotics as a precious, limited resource, the way we think of forests and fisheries — something we protect and restore,” said one expert. Economists have long argued that pricing — Pigouvian taxes — is a good way to manage scarce natural resources. Carbon taxes. Royalties on oil drilling. Catch shares for fisheries. Are antibiotics any different?

Further reading: The FDA is cracking down on antibiotics on farms. Here’s what you should know.

* By the way, if you're interested in antibiotic resistance and superbugs, you should really, really be following Maryn McKenna's work.


Lawsuit: Oracle called $50K 'good money for an Indian' | ITworld

$
0
0

Comments:"Lawsuit: Oracle called $50K 'good money for an Indian' | ITworld"

URL:http://www.itworld.com/it-management/399838/ex-oracle-salesman-claims-complaining-about-wage-discrimination-got-him-fired


January 13, 2014, 12:27 PM

A former Oracle sales manager is suing the vendor, alleging he was fired shortly after complaining of discriminatory actions by his superior and other company officials.

Ian Spandow was a high-performing sales manager at Oracle in Europe and later California, according to his lawsuit filed last week in U.S. District Court for the Northern District of California. After coming aboard in 2005, he trained more than 1,000 new hires and gave skills coaching to hundreds of others, the suit states.

Spandow was subsequently promoted in January 2008 to the position of coaching manager, and after continued success was promoted to work as a sales manager at Oracle's U.S. headquarters in Redwood Shores, according to the suit.

Despite performing well in his new role, Spandow, who is Irish, "experienced discriminatory and retaliatory conduct based on his national origin and after his complaint of various improper practices, including the company's discriminatory pay practices of employees based on their national origin," the suit states.

In September 2012, Spandow asked for permission to transfer an Oracle employee working in India to California. Spandow wanted to give the employee, who had a good track record, "a compensation level that was equivalent to Caucasian employees hired by Oracle for the same position." But Spandow's manager denied the request and told Spandow to offer the worker a "substantially lower" amount of money, according to the suit.

"I can't in good conscience, even mention $50K/$50 to him," Spandow said of the employee in an email to his supervisor, Ryan Bambling, that was cited in the lawsuit. "It would be nothing short of discriminating against him based on his ethnicity/country of origin. How or what do I have to do/write to get a reasonable (60+) offer to him?'

This prompted a "stern response" and warning to Spandow, the suit claims.

Spandow subsequently raised his concerns with his sales director, Keith Trudeau, who said the lower salary offer would be "good money for an Indian," according to the suit.

An Oracle human resources manager, Melissa Bogers, later insisted to Spandow that the lower offer was fair, the suit adds.

Spandow was "summarily terminated" without warning on Dec. 5, 2012, just weeks after the dispute over the salary offer, according to the suit.

Spandow has suffered "humiliation, embarrassment, mental anguish and severe emotional and physical distress," according to the suit, which also alleges that Oracle has engaged in a pattern of paying Indian employees less than whites.

He is seeking unspecified compensatory and punitive damages, a declaration that Oracle's conduct was unlawful, and "all injunctive relief necessary to bring [Oracle] into compliance" with related laws, according to the suit.

Oracle spokeswoman Deborah Hellinger declined to comment on Spandow's allegations Monday.

Chris Kanaracus covers enterprise software and general technology breaking news for The IDG News Service. Chris' email address is Chris_Kanaracus@idg.com

Stanford Whizzes Develop an Astoundingly Cheap Fix for Clubfoot | Wired Design | Wired.com

$
0
0

Comments:"Stanford Whizzes Develop an Astoundingly Cheap Fix for Clubfoot | Wired Design | Wired.com"

URL:http://www.wired.com/design/2014/01/curing-kids-style-design-thinking/


In 2014, when you can monitor everything from sleep apnea to blood sugar with an iPhone app, the treatment for a common birth defect called clubfoot looks positively Dickensian. Clubfoot affects one in every thousand newborns and causes their feet to turn inwards, making it look like they’re walking on their ankles. Treatment can correct the condition in most cases, but post-operative physical therapy involves years of wearing an ugly, uncomfortable, and expensive “orthopedic brace” that consists of a pair of clumsy shoes separated by a steel shank.

The braces are designed to hold the patient’s feet in a specific position that strengthens muscles and helps the feet develop properly. While it’s hard to argue with the clinical outcomes, many braces are unstable, making walking nearly impossible, and the kids who wear them look like holdovers from a sad time when affliction by “bad humors” was a plausible medical diagnosis. Adding insult to injury, these thoroughly low-tech braces cost more than iPads, generally coming with a $300-700 price tag.

Fortunately, a pair of Stanford students named Jeff Yang and Ian Connolly have developed a modern solution that is equally functional, but costs $20 and is far less garish. Their colorful, injection molded brace locks the patient’s feet into a therapeutic position while the light plastic frame makes it possible for kids to stand and play on their own. People watching the kids bound around in their braces might even think that the device was a new toy.

The inventive aid was the result of a course on “extreme affordability” at Stanford’s D.School, an institution set up by IDEO veterans to inculcate students in design thinking methodologies. Administrators were approached by Miraclefeet, an organization that works to treat clubfoot in the developing world and was looking for a low-cost product. Yang and Connolly stepped up to take on the challenge of developing a product that would give kids access to top-quality care without making them feel like science experiments.

Yang and Connolly visited Brazil to learn about how children with clubfoot were treated and left shocked by what they found. The tech seemed ancient in the U.S., but the local standard of care was abysmal. “Many of the lower-end braces are just literally curved aluminum rods,” says Yang. “Physicians will bend metal over the end of a chair while talking to the parents.” Unsurprisingly, the kids hated wearing the braces, and when they did, they often fell over due to the poor design.

After talking with physicians, parents, and representatives from Miraclefeet, the designers employed the standard D.School process—uncovering latent user needs through ethnographic fieldwork and bodging together prototypes to solicit feedback from stakeholders, resulting in a design that solves many of the key problems.

A simple improvement was allowing the shoes to detach from the brace using a custom cleat. This makes it easier for parents to slip the shoe on a crying toddler before attaching them to the apparatus. Detachable shoes also give children the dignity of wearing their own footwear rather than being forced to don smelly hand-me-downs that have seen 10 other feet.

Design choices like a wider base studded with rubber grommets make the brace more stable, giving kids the freedom to stand without assistance and walk around effortlessly. Surprisingly, this was a novel concept for many of the Brazilian doctors they spoke with, who had simply assumed kids would lie immobile for 12 hours at a stretch while wearing the footwear.

A pair of Stanford students have developed an attractive, functional, and affordable brace that can help cure clubfoot for $20. Photo: Jeff Yang

Therapeutic guidelines call for very specific adjustments to the posture of the children’s feet, but when braces are being fabricated in exam rooms, aids can often be misaligned. Yang and Connolly’s injection molded solution brings consistency to the caregiving process. Old fabrication methods left room for error, but the new plastic brace always meets specifications.

Using plastics as a material also allowed the designers to experiment with aesthetics. “A lot of the braces are intimidating looking, cold looking,” says Connolly. “We wanted to develop something highly functional, elegant, but using the same visual language as a child’s toy.”

The result is a brace that costs about $20, potentially making it a disruptive technology in the developing world and a game changer in the United States.

Before this brace becomes the standard of care, it will face challenges. In parts of Brazil, doctors often stay faithful to less effective solutions because they make more money delivering them. And instead of being adjustable like competing braces, the molded nature of Yang and Connolly’s design means patients will have to buy a new brace as the child grows, adding cost. Despite these drawbacks, the concept has performed well in clinical studies. The designers plan to put it before the FDA, which would allow it to be sold in the U.S., and they hope to get off on the right foot in 2014 by putting over 15,000 units into circulation by the end of the year.

Feeling small: Fingers can detect nano-scale wrinkles even on a seemingly smooth surface

$
0
0

Comments:"Feeling small: Fingers can detect nano-scale wrinkles even on a seemingly smooth surface"

URL:http://www.sciencedaily.com/releases/2013/09/130916110853.htm


Sep. 16, 2013— Our sense of touch is clearly more acute than many realize. A new study by Swedish scientists demystifies the "unknown sense" with first-ever measurements of human tactile perception.

In a ground-breaking study, Swedish scientists have shown that people can detect nano-scale wrinkles while running their fingers over a seemingly smooth surface. The findings could lead to such advances as touch screens for the visually impaired and other products, says one of the researchers from KTH Royal Institute of Technology in Stockholm.

The study marks the first time that scientists have quantified how people feel, in terms of a physical property. One of the authors, Mark Rutland, Professor of Surface Chemistry, says that the human finger can discriminate between non-patterned surfaces and surfaces patterned with ridges as small as 13 nanometres in amplitude.

"This means that, if your finger was the size of the Earth, you could feel the difference between houses from cars," Rutland says. "That is one of the most enjoyable aspects of this research. We discovered that a human being can feel a bump corresponding to the size of a very large molecule."

The research team consisted of Rutland and KTH PhD student Lisa Skedung, and psychologist Birgitta Berglund and PhD student Martin Arvidsson from Stockholm University. Their paper, "Feeling Small: Exploring the Tactile Perception Limits," was published on September 12 in Scientific Reports. The research was financed by a grant from Vinnova and the Knowledge Foundation to the SP Technical Research Institute of Sweden. Rutland says that the project will pursue applications of the research together with SP.

The study highlights the importance of surface friction and wrinkle wavelength -- or wrinkle width -- in the tactile perception of fine textures.

When a finger is drawn over a surface, vibrations occur in the finger. People feel these vibrations differently on different structures. The friction properties of the surface control how hard we press on the surface as we explore it. A high friction surface requires us to press less to achieve the optimum friction force.
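
In textbook terms (a simplification — real skin-on-surface friction is more complicated than Coulomb friction), the friction force is F = μN, where N is the normal force from pressing down and μ is the friction coefficient. For a fixed target friction force F, a higher μ means a smaller required N, which is why a high-friction surface lets the finger press more lightly.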

"This is the breakthrough that allows us to design how things feel and are perceived," he says. "It allows, for example, for a certain portion of a touch screen on a smartphone to be designed to feel differently by vibration."

The research could inform the development of the sense of touch in robotics and virtual reality. A plastic touch screen surface could be made to feel like another material, such as fabric or wood, for example. The findings also enable differentiation in product packaging, or in the products themselves. A shampoo, for example, can be designed to change the feel of one's hair.

With the collaboration of the National Institute of Standards and Technology (NIST) materials science labs, Rutland and his colleagues produced 16 chemically identical surfaces with wrinkle wavelengths (or wrinkle widths) ranging from 300 nanometres to 90 micrometres, and amplitudes (or wrinkle heights) of between seven nanometres and 4.5 micrometres, as well as two non-patterned surfaces. The participants were presented with random pairs of surfaces and asked to run their dominant index finger across each one in a designated direction, perpendicular to the grooves, before rating the similarity of the two surfaces.

The smallest pattern that could be distinguished from the non-patterned surface had grooves with a wavelength of 760 nanometres and an amplitude of only 13 nanometres.
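
As a concrete illustration of the pairwise-comparison protocol described above, here is a minimal sketch in Python. This is not the study's code, and the wavelength/amplitude grid below is an illustrative placeholder spanning the reported ranges:

import itertools
import random

# 16 patterned surfaces as (wavelength_nm, amplitude_nm) pairs. The real
# stimulus set spanned 300 nm to 90 um in wavelength and 7 nm to 4.5 um in
# amplitude; these exact values are placeholders, not the published stimuli.
patterned = [(w, a) for w in (300, 760, 9_000, 90_000)
                    for a in (7, 13, 500, 4_500)]
smooth = [None, None]  # the two non-patterned reference surfaces
surfaces = patterned + smooth

# Present every unordered pair of surfaces once, in randomized order, and
# ask the participant to stroke both and rate their similarity.
pairs = list(itertools.combinations(range(len(surfaces)), 2))
random.shuffle(pairs)

for i, j in pairs[:3]:  # show a few example trials
    print(f"Trial: stroke surface {i} ({surfaces[i]}) then surface "
          f"{j} ({surfaces[j]}), and rate their similarity")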

Rutland says that by bringing together professors and PhD students from two different disciplines -- surface chemistry and psychology -- the team succeeded in creating "a truly psycho-physical study."

"The important thing is that touch was previously the unknown sense," Rutland says. "To make the analogy with vision, it is as if we have just revealed how we perceive colour.

"Now we can start using this knowledge for tactile aesthetics in the same way that colours and intensity can be combined for visual aesthetics."

Germans abandon hope of US 'no-spy' treaty - The Local

$
0
0

Comments:"Germans abandon hope of US 'no-spy' treaty - The Local"

URL:http://www.thelocal.de/20140114/germany-gives-up-hope-of-no-spy-deal-with-nsa-usa


Photo: DPA

Published: 14 Jan 2014 09:52 GMT+01:00
Updated: 14 Jan 2014 09:52 GMT+01:00

Germany has all but given up hope of securing a "no-spy" treaty with the USA in the wake of the NSA scandal, according to reports on Tuesday.

Although talks between Germany’s security agencies and their American counterparts are officially still ongoing, the German government has little hope of a bilateral treaty which would stop the US spying on the German government, the Süddeutsche Zeitung and broadcaster NDR reported, quoting a high-ranking civil servant.

Documents leaked by US security contractor Edward Snowden revealed a mass surveillance programme being run by the US National Security Agency (NSA).

In October it emerged the NSA had been tapping Chancellor Angela Merkel’s phone and allegedly ran a listening station from the US Embassy in Berlin, right in the centre of the German government quarter.

The idea that one of its closest allies was apparently spying on it so energetically provoked outrage in Germany - and apologies from the USA.

But talks to reach a “no spy” agreement appear to have stalled.

The Süddeutsche headlined its report: “The Americans have lied to us”.

The report said Washington had not met Berlin’s key demands which included a promise to stop listening to politicians’ phone calls, give German officials access to the alleged listening station in the US Embassy, explain how long Merkel’s phone was monitored, and state whether or not she was the only prominent Germany politician to be targeted.

The civil servant told the paper: “We’re getting nothing.”

The Süddeutsche also said that the head of Germany’s foreign intelligence agency (BND), Gerhard Schindler, had told colleagues that he would rather not sign the deal in its current form. “There is great bitterness," the paper added.

A government spokesman said the talks between the US and Germany were ongoing and they hoped to “get something in the next three months.”

DPA/The Local (news@thelocal.de)

VVVV.js Lab

$
0
0

Comments:"VVVV.js Lab"

URL:http://lab.vvvvjs.com/index.php


Welcome to the VVVV.js Lab,

the place to patch, learn, remix and share VVVV.js. It works kind of like a very simple versioning tool: you can open any of the VVVV.js patches below, alter them, and submit your own version to the gallery.

The coolest thing about it is that you don't have to deploy VVVV.js anywhere yourself to try it. You don't even have to install any additional software, as the VVVV.js editor works entirely in your browser.

Enjoy!
