Channel: Hacker News 50

CBSD - FreeBSD Jail Management Tools


Comments:"CBSD - FreeBSD Jail Management Tools"

URL:http://www.bsdstore.ru/html/about_en.html


About the project

CBSD is a management layer written for the FreeBSD jail(8) subsystem. It aims to unify racct(8), vnet, zfs(8), carp(4), and hastd(8) in one tool, providing a more comprehensive solution for quickly building and deploying applications from pre-defined software sets with minimal configuration.

No extra OS functionality has been exposed yet, and everything that cbsd can do, you could also do manually with tens or hundreds of CLI commands using the underlying utilities (not that you would want to!).

Features:

  • Fast deployment of jails from scratch
  • Import and export to and from images
  • Cloning and migration (including to remote nodes)
  • Snapshots using ZFS
  • Traffic Accounting and Resource Utilisation Information (per-jail)
  • Resource Management: Priorities (re-nice), RACCT/RCTL, File Quotas
  • Remote Replication
  • Jail Distributions (jail with a certain set of software and services)
  • Web Interface and Centralized Management
    Why FreeBSD, why jail, why sh, why...?

    FreeBSD Jails were chosen for several reasons:

  • Zero Virtualization Overhead: Without VIMAGE, the jail code is a very simple design.
  • Security: "divide and conquer". It is desirable for each service or group of services to be isolated.
  • Efficient Environment Replication: Systems engineers often have to deploy lots of similar environments - AMP, MTA, KDE4. At some point you want environments created and configured in advance. Jails allow you to deploy new environments into operation instantly. It also allows the creation of environments that differ from their master template only in configuration, such as different package sets.
  • Speed of deployment and convenient backups.
  • Building your own jail library with customizable options via FreeBSD ports

    Most of the code is written in sh, since there is no demand for complex logic: it primarily automates what would otherwise be manual repetition of commands on the console, and it is designed to work with external utilities such as zfs, zpool, sudo, pkg, rsync, etc. Areas that require optimization, and specific components such as logtail, replication, and the node-watching daemon, are written in C for performance.

    cbsd depends on the following software: rsync, sudo, libssh2, sqlite3

    Feature Detail

  • A ready repository for kernels and world that does not require buildworld/installworld.
  • src.conf support for buildworld/installworld customization
  • The catalog can be stored on memory disks, in RAM, or on tmpfs, with a read-only mounted base
  • ZFS: Filesystem, Quotas and Snapshot support
  • GUI Configuration of jails (Dialog or Web UI)
  • VIMAGE support
  • Per-jail Traffic Accounting
  • Jail Import/Export
  • Jail Descriptions
  • Cold migration between nodes
  • Custom jail startup sequences and priorities
  • RACCT/RCTL support
  • A repository of ready jail templates
  • Jail Replication
  • Jail Conversion to PXE/ISO/Memstick-image
  • Support for non-native architectures via QEMU user mode (e.g. an arm or mips64 jail on an x86-64 host system)

    Goals

  • Environment deployment automation
  • Convenient management, monitoring and control
  • Creation of application platform with services on demand.
  • Environment (Image) library for rapid provisioning

    The monkeys in 2013 | JavaScript


    Comments:"The monkeys in 2013 | JavaScript"

    URL:https://blog.mozilla.org/javascript/2014/01/23/the-monkeys-in-2013/


    A monkey. That’s the default name a part of the JavaScript engine of Mozilla Firefox gets. Even the full engine has its own monkey name: SpiderMonkey. 2013 has been a transformative year for the monkeys. New species have been born and others have gone extinct. I’ll give a brief, though incomplete, overview of the developments that happened.

    Before 2013, JaegerMonkey had established itself as the leader of the pack (i.e. the superior engine in SpiderMonkey) and was the default JIT compiler in the engine. It was successfully introduced in Firefox 4.0 on March 22nd, 2011. Its original purpose was to augment the first JIT monkey, TraceMonkey. Two years later it had kicked TraceMonkey out of the door and was absolute ruler in monkey land. Along the ride it had totally changed: many optimizations had been added, the most important being Type Inference. But there were drawbacks. JaegerMonkey wasn’t really designed to host all those optimizations, and it was becoming harder and harder to add new flashy things, and easier and easier to add faults. JaegerMonkey had always been a worthy monkey, but it was starting to buckle under its age.

    Improvement of Firefox on the octane benchmark

    The year 2013 was only eight days old and together with the release of Firefox 18, a new bad boy was in town, IonMonkey. It had received education from the elder monkeys, as well as from its competitors and inherited the good ideas, while trying to avoid the old mistakes. IonMonkey became a textbook compiler with regular optimization passes only adjusted to work in a JIT environment. I would recommend reading the blogpost of the release for more information about it. Simultaneously, JaegerMonkey was downgraded to a startup JIT to warm up scripts before IonMonkey took over responsibility.

    But that wasn’t the only big change. After the release of IonMonkey in Firefox 18 the year 2013 saw the release of Firefox 19, 20, all the way to number 26. Also Firefox 27, 28 and (partly) 29 were developed in 2013. All those releases brought their own set of performance improvements:

    - Firefox 19 was the second release with IonMonkey enabled. Most work went into improving the new infrastructure of IonMonkey. Another notable improvement was updating Yarr (the engine that executes regular expressions imported from JSC) to the newest release.

    - Firefox 20 saw range analysis, one of the optimization passes of IonMonkey, refactored. It was improved and augmented with symbolic range analysis. Also this was the first release containing JavaScript self-hosting infrastructure that allows standard, builtin functions to be implemented in JavaScript instead of C++. These functions get the same treatment as normal functions, including JIT compilation. This helps a lot with removing the overhead from calling between C++ and JavaScript and even allows builtin JS functions to be inlined in the caller.

    - Firefox 21 is the first release where off-thread compilation for IonMonkey was enabled. This moves most of the compilation to a background thread, so that the main thread can happily continue executing JavaScript code.

    - Firefox 22 saw a big refactoring of how we inline and made it possible to inline a subset of callees at a polymorphic callsite, instead of everything or nothing. A new monkey was also welcomed, called OdinMonkey. OdinMonkey acts as an Ahead of Time compiler optimization pass that reuses most of IonMonkey, kicking in for specific scripts that have been declared to conform to the asm.js subset of JavaScript. OdinMonkey showed immediate progress on the Epic Citadel demo. More recently, Google added an asm.js workload to Octane 2.0 where OdinMonkey provides a nice boost.

    - Firefox 23 brought another first: the first compiler without a monkey name, the Baseline Compiler. It was designed from scratch to take over the role of JaegerMonkey. It is the proper startup JIT that JaegerMonkey was forced to become when IonMonkey was released. No recompilations or invalidations in the Baseline Compiler: it only saves type information and makes it easy for IonMonkey to kick in. With this release IonMonkey was also allowed to kick in 10 times earlier. At this point, Type Inference was only needed for IonMonkey. Consequently, major parts of Type Inference were moved and integrated directly into IonMonkey, improving memory usage.

    - Firefox 24 added lazy bytecode generation. One of the first steps in JS execution is parsing the functions in a script and creating bytecode for them. (The whole engine consumes bytecodes instead of a raw JavaScript string.) With big libraries, a lot of functions are never used, so creating bytecode for all of them adds unnecessary overhead. Lazy bytecode generation allows us to wait until the first execution before parsing a function, and avoids parsing functions that are never executed.

    - Firefox 25 to Firefox 28: No real big performance improvements that stand out. A lot of smaller changes under the hood have landed. Goal: polishing existing features or implementing small improvements. A lot of preparation work went into Exact Rooting. This is needed for more advanced garbage collection algorithms, like Generational GC. Also a lot of DOM improvements were added.

    - Firefox 29. Just before 2014, off-thread MIR construction landed. Now the whole IonMonkey compilation process can run off the main thread: with two or more cores, compilation no longer delays execution.

    Improvement of Firefox on the Dromaeo benchmark

    All these things resulted in improved JavaScript speed. Our score on Octane v1.0 increased considerably compared to the start of the year, and we are again competitive on the benchmark. Towards the end of the year, Octane v2.0 was released and we took a small hit, but we were very efficient in finding opportunities to improve the engine, and our Octane v2.0 score has almost surpassed our Octane v1.0 score. Another example of how much SpiderMonkey’s speed has increased is the score on the Web Browser Grand Prix on Tom’s Hardware. In those reports, Chrome, Firefox, Internet Explorer and Opera are tested on multiple benchmarks, including Browsermark, Peacekeeper, Dromaeo and a dozen others. During 2013, Firefox held a steady second place behind Chrome. Unexpectedly, the hard work brought us to first place in the Web Browser Grand Prix of June 30th: Firefox 22 was crowned above Chrome and Opera Next. More important than all these benchmarks are the reports we get of overall improved JavaScript performance, which is very encouraging.

    A new year starts and improving performance is never finished. In 2014 we will try to improve the JavaScript engine even more. The usual fixes and adding of fast paths continues, but also the higher-level work continues. One of the biggest changes we will welcome this year is the landing of Generational GC. This should bring big benefits in reducing the long GC pauses and improving heap usage. This has been an enormous task, but we are close to landing. Other expected boosts include improving DOM access even more, possibly a lightweight way to do chunked compilation in the form of loop compilation, different optimization levels for scripts with different hotness, adding a new optimization pass called escape analysis … and possibly much more.

    A happy new year from the JavaScript team!

    Use Subqueries to Count Distinct 50X Faster


    Comments:"Use Subqueries to Count Distinct 50X Faster"

    URL:https://periscope.io/blog/use-subqueries-to-count-distinct-50x-faster.html



    22 Jan 2014

    NB: These techniques are universal, but for syntax we chose Postgres. Thanks to the inimitable pgAdminIII for the Explain graphics.

    So Useful, Yet So Slow

    Count distinct is the bane of SQL analysts, so it was an obvious choice for our first blog post.

    First things first: If you have a huge dataset and can tolerate some imprecision, a probabilistic counter like HyperLogLog can be your best bet. (We'll return to HyperLogLog in a future blog post.) But for a quick, precise answer, some simple subqueries can save you a lot of time.

    Let's start with a simple query we run all the time: Which dashboards do most users visit?

    select dashboards.name, count(distinct time_on_site_logs.user_id)
    from time_on_site_logs
    join dashboards on time_on_site_logs.dashboard_id = dashboards.id
    group by name
    order by count desc

    For starters, let's assume the handy indices on user_id and dashboard_id are in place, and there are lots more log lines than dashboards and users.

    On just 10 million rows, this query takes 48 seconds. To understand why, let's consult our handy SQL explain:

    It's slow because the database is iterating over all the logs and all the dashboards, then joining them, then sorting them, all before getting down to real work of grouping and aggregating.

    Aggregate, Then Join

    Anything after the group-and-aggregate is going to be a lot cheaper because the data size is much smaller. Since we don't need dashboards.name in the group-and-aggregate, we can have the database do the aggregation first, before the join:

    select dashboards.name, log_counts.ct
    from dashboards
    join (
      select dashboard_id, count(distinct user_id) as ct
      from time_on_site_logs
      group by dashboard_id
    ) as log_counts on log_counts.dashboard_id = dashboards.id
    order by log_counts.ct desc

    This query runs in 20 seconds, a 2.4X improvement! Once again, our trusty explain will show us why:

    As promised, our group-and-aggregate comes before the join. And, as a bonus, we can take advantage of the index on the time_on_site_logs table.

    First, Reduce The Data Set

    We can do better. By doing the group-and-aggregate over the whole logs table, we made our database process a lot of data unnecessarily. Count distinct builds a hash set for each group — in this case, each dashboard_id — to keep track of which values have been seen in which buckets.

    Instead of doing all that work, we can compute the distincts in advance, which only needs one hash set. Then we do a simple aggregation over all of them.

    select dashboards.name, log_counts.ct
    from dashboards
    join (
      select distinct_logs.dashboard_id, count(1) as ct
      from (
        select distinct dashboard_id, user_id
        from time_on_site_logs
      ) as distinct_logs
      group by distinct_logs.dashboard_id
    ) as log_counts on log_counts.dashboard_id = dashboards.id
    order by log_counts.ct desc

    We've taken the inner count-distinct-and-group and broken it up into two pieces. The inner piece computes distinct (dashboard_id, user_id) pairs. The second piece runs a simple, speedy group-and-count over them. As always, the join is last.

    And now for the big reveal: This sucker takes 0.7 seconds! That's a 28X increase over the previous query, and a 68X increase over the original query.

    As always, data size and shape matters a lot. These examples benefit a lot from a relatively low cardinality. There are a small number of distinct (user_id, dashboard_id) pairs compared to the total amount of data. The more unique pairs there are — the more data rows are unique snowflakes that must be grouped and counted — the less free lunch there will be.
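The hash-set argument above can be sketched outside SQL. Here is a toy Python model (purely illustrative; the tiny in-memory "table" of (dashboard_id, user_id) pairs is invented for the demo) showing that deduplicating the pairs first, with a single hash set, yields the same per-dashboard counts as feeding every raw row into one hash set per dashboard:

```python
from collections import defaultdict

# Invented sample data: (dashboard_id, user_id) log rows, with duplicates.
logs = [
    (1, "a"), (1, "a"), (1, "b"),
    (2, "a"), (2, "a"), (2, "c"), (2, "c"),
]

# Naive count distinct: one hash set per dashboard, fed every row.
per_dash = defaultdict(set)
for dash, user in logs:
    per_dash[dash].add(user)
naive = {dash: len(users) for dash, users in per_dash.items()}

# Subquery approach: dedupe all pairs once (one hash set), then a cheap
# count per group over the much smaller deduplicated set.
distinct_pairs = set(logs)
counts = defaultdict(int)
for dash, _user in distinct_pairs:
    counts[dash] += 1

assert naive == dict(counts)  # same answer, less per-group hashing
print(naive)  # {1: 2, 2: 2}
```

The smaller the set of distinct pairs relative to the raw rows, the bigger the win, which is exactly the low-cardinality caveat above.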

    Next time count distinct is taking all day, try a few subqueries to lighten the load.

    Who Are You Guys, Anyway?

    We make Periscope, a tool that makes SQL data analysis really fast. We'll be using this space to share the algorithms and techniques we've baked into our product.

    You can sign up on our homepage to be notified as we take on new customers.

    King Copied — Junkyard Sam


    Comments:"King Copied — Junkyard Sam"

    URL:http://junkyardsam.com/kingcopied/#


    No "contract" was ever signed; this was Lars/King justifying their actions to a small indie developer that might otherwise have turned down the request to copy our game.

    Scamperghost isn't the most original game in the world.  It's obviously inspired by Pac-Man but we at least took it in an original direction by making it a mouse avoider with no walls.

    King.com, however, showed no respect for other people's intellectual property when they made a direct, blatant clone of Scamperghost. Now they've trademarked "Candy" and are using their massive legal power against other small competing developers. A bit of a double-standard, eh?

    Junkyard Sam
    junkyardsam@gmail.com

    PS. Thank you, Squarespace, for the bandwidth. Please consider them if you need a webhost.

    BBC News - 'Revenge porn' website former owner Hunter Moore arrested


    Comments:"BBC News - 'Revenge porn' website former owner Hunter Moore arrested "

    URL:http://www.bbc.co.uk/news/technology-25872322


    23 January 2014, last updated at 17:33 ET

    US authorities have arrested two men in California for hacking email accounts and stealing nude photos to post on a so-called "revenge porn" website.

    Hunter Moore, 27, and Charles Evens, 24, face charges including conspiracy, unauthorized access to a protected computer to obtain information and aggravated identity theft.

    The men reportedly posted explicit images, submitted without the victim's permission, to IsAnyoneUp.com.

    If guilty, they face decades in prison.

    Latest legal setback

    The arrest on Thursday was the culmination of an FBI investigation into the matter, US Attorney Wendy Wu said in a statement.

    According to court documents, Mr Moore operated a website which posted sexually explicit images for the purposes of revenge.

    Mr Moore is said to have paid Mr Evens to hack into hundreds of victims' email accounts to obtain more nude photos to post on the website.

    The illegally obtained photos were then put online without the consent of those pictured.

    It is the latest legal setback for Mr Moore, who was ordered in March to pay $250,000 (£170,000) in damages for defamation resulting from a civil lawsuit.

    Mr Moore was found to have used Twitter to make false claims about the chief executive of an anti-bullying website, James McGibney.

    James McGibney alleged Mr Moore had labelled him a paedophile who possessed child pornography.

    Mr McGibney's website Bullyville.com had purchased the domain, IsAnyoneUp.com, from Mr Moore in 2012.


    Side Effect Free Rants: Functional Programming 101 - With Haskell


    Comments:"Side Effect Free Rants: Functional Programming 101 - With Haskell"

    URL:http://blog.gja.in/2014/01/functional-programming-101-with-haskell.html


    In this blog post, I'll attempt to explain some basic concepts of functional programming, using Haskell. This post isn't about Haskell per se, but about one way of approaching a problem that demonstrates the benefits of functional programming.

    You can run most of these examples in ghci, by saving the contents of the example into a .hs file, loading up ghci and running :load file.hs.

    Many thanks to Mattox Beckman for coming up with the programming exercise, and to Junjie Ying for finding a better data structure for this explanation than the one I came up with.

    The Problem

    You are Hercules, about to fight the dreaded Hydra. The Hydra has 9 heads. When a head is chopped off, it spawns 8 more heads. When one of these 8 heads is cut off, each one spawns 7 more heads. Chopping one of those spawns 6 more heads, and so on, until the weakest heads of the Hydra spawn no more heads at all.

    Our job is to figure out how many chops Hercules needs to make in order to kill all heads of the Hydra. And no, it's not n!.

    Prelude: Simple Overview Of Haskell Syntax

    Before we dive into code, I'll explain a few constructs which are unique to Haskell, so that it's easy to follow for non-Haskellers.

    • List Creation: You can create a list / array using the : operator. This can even be done lazily to get an infinite list.

        let firstArray = 0:1:[2, 3]       -- [0, 1, 2, 3]
        let infiniteOnes = 1:infiniteOnes -- [1, 1, 1, ...] never stops, hit Ctrl-C to get your interpreter back

    • Defining Functions: Looks just like defining a variable, but it takes parameters. One way they differ from other languages is the ability to do pattern matching to simplify your code. Here, I define a function that sums all the elements of a list.

        sumOfElements [] = 0
        sumOfElements (x:xs) = x + sumOfElements xs

    • More List Foo: Adding lists can be done with ++. Checking if a list is empty can be done with null. You can use replicate to create a list with the same element repeated.

        [1] ++ [3]      -- [1, 3]
        null []         -- True
        null [1]        -- False
        replicate 2 3   -- [3, 3]

    Choosing a data structure

    Let's choose a simple data structure to represent the hydra. We'll pick an array to represent the heads of the Hydra, using the level of each head as the value. The initial state of the Hydra (with 9 level 9 heads) can be represented as follows: [9, 9, 9, 9, 9, 9, 9, 9, 9].

    Chopping off a head

    The whole point of functional programming is to build small functions and compose them later. We'll build a few functions, specific to our domain, and a few more general ones to orchestrate.

    Let's first build a specific function to chop off the Hydra's head. We know that chopping off one level 9 head should result in 8 level 8 heads (and 8 of the original level 9 heads). This is represented as [8, 8, 8, 8, 8, 8, 8, 8, 9, 9, 9, 9, 9, 9, 9, 9]

    Let's build the chop function. It takes a single argument: the current levels of all live heads. It returns the state of the heads after chopping off the first one.

    The three lines of code below map to these rules:

    • If there are no heads left, just return []
    • If there is a level 1 head at the start of our list, just chop it off and return the rest of the array
    • If there is a higher level head (level n) at the start of our list, spawn n - 1 heads in its place

        chop [] = []
        chop (1:xs) = xs
        chop (n:xs) = (replicate (n - 1) (n - 1)) ++ xs
        ----------------------------------------------------
        chop [1]                 -- []
        chop [4]                 -- [3, 3, 3]
        chop [3, 3, 3]           -- [2, 2, 3, 3]
        chop [9,9,9,9,9,9,9,9,9] -- [8,8,8,8,8,8,8,8,9,9,9,9,9,9,9,9]

    Repeatedly chopping heads

    Our function chop is a pure function: given some input, it always returns the same output, without any side effects. "Side effects" is a general term for modifying inputs, IO operations, DB calls, and so on.

    Since chop is a pure function, we can safely call it over and over. Let's build a list where each element is the result of chopping the previous element.

        repeatedlyChop heads = heads:repeatedlyChop (chop heads)
        ----------------------------------------------------------
        repeatedlyChop [3] -- [[3],[2,2],[1,2],[2],[1],[],[],[] ...] this is an infinite list

    This pattern is so common that there is a functional construct for it: iterate. We can replace the above code with the following:

    repeatedlyChop heads = iterate chop heads

    Truncate that infinite list

    Great, we now have built a list of all the states the Hydra is in while Hercules is busy chopping away at it. However, this list goes on forever (we never put in a termination condition in the earlier code), so let's do that now.

    We can use a simple empty check (null) to test if the Hydra is still alive. Let's keep items as long as the Hydra is alive.

    takeWhileAlive (x:xs) = if null x then [] else x:(takeWhileAlive xs)

    Putting the two together

        repeatedlyChopTillDead heads = takeWhileAlive (repeatedlyChop heads)
        ----------------------------------------------------------------------------
        repeatedlyChopTillDead [4] -- [[4],[3,3,3],[2,2,3,3],[1,2,3,3],[2,3,3],[1,3,3],[3,3],[2,2,3],[1,2,3],[2,3],[1,3],[3],[2,2],[1,2],[2],[1]]

    Again, these patterns are so common that we can replace the entire thing with a single line. takeWhile keeps elements of the list up to the first one that fails the predicate.

    repeatedlyChopTillDead heads = takeWhile (not.null) (iterate chop heads)

    Finishing up

    Now that we have the sequence of chops needed to kill that Hydra, figuring out the number of chops is just a matter of figuring out how long the sequence is.

        countOfChops heads = length (repeatedlyChopTillDead heads)
        --------------------------------------------------
        countOfChops [1]                 -- 1
        countOfChops [3]                 -- 5
        countOfChops [9,9,9,9,9,9,9,9,9] -- 986409 (this takes a while)
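As a quick cross-check of that 986409 figure (my own aside, not from the post, sketched in Python for brevity): killing one level-n head costs one chop plus the cost of killing the n - 1 level-(n - 1) heads it spawns, giving the recurrence T(n) = 1 + (n - 1) * T(n - 1) with T(1) = 1, and nine level-9 heads cost 9 * T(9):

```python
def chops_for_level(n):
    # T(1) = 1; T(level) = 1 + (level - 1) * T(level - 1):
    # one chop for the head itself, plus its (level - 1) spawned heads.
    total = 1
    for level in range(2, n + 1):
        total = 1 + (level - 1) * total
    return total

print(chops_for_level(3))      # 5, matching countOfChops [3]
print(9 * chops_for_level(9))  # 986409, matching the full Hydra
```

Unlike the Haskell version, the recurrence never materializes the head list, so it is instant even for large Hydras.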

    Extending

    Now that we've solved the problem, what next? How easy is it to extend? Let's add a new requirement: Hercules, though a half god, can only fight at most n Hydra heads at a time. If the number of Hydra heads ever goes above n, then Hercules dies. Let's write a function willHerculesDie which takes two parameters: n and the Hydra.

    Turns out, this is pretty simple. We just need to count all the heads that are alive. If the count is more than n at any point, then we return true, and Hercules dies.

        willHerculesDie n heads = any (> n) (map length (repeatedlyChopTillDead heads))
        ----------------------------------------------------------------------------
        willHerculesDie 37 [9,9,9,9,9,9,9,9,9] -- False
        willHerculesDie 36 [9,9,9,9,9,9,9,9,9] -- True
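To see where that 36/37 boundary comes from, here is a direct Python port of chop (an illustrative translation of the Haskell above, not part of the original post) that tracks the peak number of simultaneous heads:

```python
def chop(heads):
    # Mirrors the Haskell chop: a level-1 head just dies; a level-n head
    # is replaced by (n - 1) heads of level (n - 1).
    if not heads:
        return []
    n, rest = heads[0], heads[1:]
    return rest if n == 1 else [n - 1] * (n - 1) + rest

def max_live_heads(heads):
    # Peak list length across the whole chopping sequence.
    peak = len(heads)
    while heads:
        heads = chop(heads)
        peak = max(peak, len(heads))
    return peak

# The peak is the largest n for which willHerculesDie n is False.
print(max_live_heads([9] * 9))
```

This simulates all 986409 chops, so it takes a few seconds; it reports a peak of 37 heads, which is why willHerculesDie 37 is False and willHerculesDie 36 is True.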

    So what next?

    We've built a bunch of really composable functions, and we can look at each one independently to tune the system.

    You can get Haskell set up with the Haskell Platform and play around with the code here.


    Google Play Store Ratings Drop | The TestFairy blog


    Comments:"Google Play Store Ratings Drop | The TestFairy blog"

    URL:http://blog.testfairy.com/google-play-store-ratings-drop/


    Posted on January 1st, 2014 by Jenni

    One of the worst nightmares for any developer is to wake up in the morning and find out their app’s rating has suddenly crashed.  This is exactly what happened to many Android app developers who noticed unusual activity in their Google Play Store ratings after December 10, 2013.  Unfortunately, it wasn’t a flurry of 5 stars.  Instead, developers began to sweat it out as they watched 1s, 2s, and 3s slowly diminish their 4-5 star ratings. After research and after reaching out to Android app developers through Google+, Facebook, and online forums, a picture of what happened, and of its unintended, surprising results, has been pieced together by Android app developers.

    Amir Uval, the developer behind Countdown Timer, an interval timer and alarm, started a discussion on Facebook and Google+ to reach out to other Android app developers and “discovered I’m not alone.” In his Google+ discussion, details have emerged, links to similar discussions online have been shared, and he “finally put all the puzzle pieces together and concluded what was the source.” Mr. Uval talked with us about his experience and findings. He says, “I’ve noticed a strange flow of low ratings on Dec 10. I’ve been getting a 1 or 2 once a month before that, and I started to get more negative reviews on a daily basis.”

    This graph shows the dramatic shift that prompted Amir Uval’s Google+ discussion.

    Around December 10, 2013, the Google Play Store added a new feature called, “Want Quick Suggestions?” An app appears on the screen, and the user is encouraged to offer a rating without the opportunity to provide a comment. The rating appears to help Google make better suggestions for Android app purchases and downloads by a user’s assessment of the suggestion. But, Uval discovered the rating of the suggestion is converted to a rating for the app. He also discovered that users who touch on the stars as their finger scrolls on the screen could leave a rating. This updated interface seems to provide more opportunities for unintentional, random ratings. This same assessment of the problem is discussed on Reddit and Android forums.

    Paolo Conte joined Uval’s Google+ discussion and shared a graph with us that is nearly identical.  Conte’s app, Trains Timetable Italy (Orario Treni), “had a rating of 5 stars (4.8) for a long time.”  Conte says, “In Italy it is the number 1 app in the transportation category, and it is also featured in the Best of 2013 section.”  And, again, he shares a similar theme to Uval’s experience, “Since Dec 10th I started noticing a lot of 1 star ratings, but with no negative comments.”

    “As you can see in the chart below, which covers a time span of one year, it is clear this is just wrong.” – Paolo Conte

    Mateusz Mucha is an Android app developer based in Krakow, Poland, whose app, Percentage Calculator, has suffered a similar fate as Uval and Conte.  After December 10, he noticed an increase in 1 ratings on what had previously been a 4.7 rated app.  He said, “Over the next 3.5 weeks, Percentage Calculator received over twice as many 1-star ratings than in its whole 14-month history.”  Mucha took a look at the “Want Quick Suggestions?” app rating feature and concluded, “I’m only sure of two things: I cannot fairly rate it and Google makes me do it.”  The required participation of users who may or may not understand what they are evaluating is creating unnecessary confusion; and, with graphs like Mucha’s below, frustrated developers are losing sleep.

    Mucha’s Percentage Calculator “had approx. 27 1-star ratings on December 10, now it has 92.”

    The game, Move: A Brain Shifting Puzzle, has experienced the same pattern.  Noam Abta, the developer, said, “It had a very steady average rating of around 4.7, until around the 10 of December, it started to drop gradually and continuously.”  Abta’s graph below is yet another example of a highly rated app in the Google Play Store experiencing a decline on December 10, 2013.  Abta added, “The frustrating part was that most of the commented reviews we got were still enthusiastic 5 star reviews.”

    Abta’s “Move: A Brain Shifting Puzzle” launched strongly in October 2013.

    Combining the rating of a suggestion with the rating of a specific app’s performance creates a gauge that is more difficult to use and implement in the development process. Uval says, “They just don’t mix – suggestion box asks for relevance, and rating – for overall quality and overall satisfaction with an app.” Right now, developers are struggling to understand their diminishing ratings in light of the commingled ratings and inability to receive comments and feedback from the “Want Quick Suggestions?” interface.

    Updates to the Google Play Store interface, ratings, and data affect developers, and hopefully Google will respond to their concerns quickly. Uval suggests “a little note in the developer console” to inform developers of changes. And, as many developers note in forums, the ratings should be separated.

    For developers who share Uval’s experience, this discussion about the ratings change also revealed an unfortunate timing issue. Many of their Google connections are enjoying a holiday vacation, as one of Uval’s Google contacts “autoreplied he is on vacation.”

    Right now, unfortunately, there are few options for developers impacted by the ratings. Developers are reaching out to contacts at Google, creating an online conversation, and hoping users swipe anywhere other than the ratings interface. Uval had a bump up in positive ratings after he released an update. He said, “I guess many of my happy users had a chance to rate.”

    If you are an Android developer who has experienced ratings changes as a result of the “Want Quick Suggestions?” feature around December 10, 2013 and would like to add their story, please contact us and we will be happy to include your experience in this post.

    Follow TestFairy on Twitter, Google+ and Facebook.

    Learn more about TestFairy here


    Filed under: Android news by Jenni

    LessMilk | One new html5 game per week

    00:45 why can't just everyone follow and understand maths 00:45 < - Pastebin.com


    Comments:"00:45 <@[-wolong-]> why can't just everyone follow and understand maths 00:45 < - Pastebin.com"

    URL:http://pastebin.com/tZYvK7UD


  • 00:45 <@[-wolong-]> why can't just everyone follow and understand maths

  • 00:45 <@Tuxedage> Why else are you in this channel?

  • 00:45 < Kokes> Hinv i bought in at 219, and sold everything, if the price drops it doesnt matter

  • 00:45 < thatslifeson> still missing 70+ BTC on 185

  • 00:46 < smshba> should we buy in now @ 195 [-wolong-]?

  • 00:46 < Kokes> wolong knows what he does :)

  • 00:46 <@[-wolong-]> FUCK

  • 00:46 <@[-wolong-]> PULL OUT

  • 00:46 <@k33k> it doesn't work if most of us chicken out

  • 00:46 < molunbun> sigh people don't cooperate

  • 00:46 <@[-wolong-]> fuck to whose who aint cooperating

  • 00:46 < sh0lla> PULL OUT

  • 00:46 < hinv> Tuxedage we keep going down and the fishes will look at the technicals and avoid as well

  • 00:46 <@[-wolong-]> i can't do shit if no one is cooperating

  • 00:46 < thatslifeson> ^

  • 00:46 <@k33k> we need a way to identify people who aren't cooperating and kick them

  • 00:46 < thatslifeson> i'm out

  • 00:46 < alphaw0lf> what u mean pull out?!

  • 00:46 < alphaw0lf> we cant abort now

  • 00:46 <@[-wolong-]> FUCK TO THOSE WHO ARE SELFISH SERIOUSLY

  • 00:46 < sh0lla> Delete your sells

  • 00:46 < thatslifeson> remove 185

  • 00:46 < alphaw0lf> how we gonna buy back

  • 00:46 < alphaw0lf> omg

  • 00:46 < otherjoe> I pulled out

  • 00:46 < Nekomata3> :|

  • 00:46 < Tyme> I canceled

  • 00:46 < Crat0s> SERIOUSLY>?

  • 00:46 < Crat0s> my shit all sold

  • 00:46 < hinv> [-wolong-] we ain't cooperating because we are bleeding down here

  • 00:46 < JohnathonDoe>  /away

  • 00:46 < sh0lla> someone bought

  • 00:46 < Kokes> oh my dayz

  • 00:46 < todamoon_> all my orders jus tsold at 185

  • 00:46 < stormcrow> I sold about 33% before we pulled.

  • 00:46 < dogetrader99> WTF!?!?!??!?!

  • 00:47 < rowtrip> this is the second time i got burned on this... damn

  • 00:47 < JohnathonDoe> I am tired of people not following something so simple

  • 00:47 < maursader> hinv, it's about averages

  • 00:47 < thatslifeson> lol, you only bleed because you don't cooperate, get outta here

  • 00:47 < alphaw0lf> wolong -> thus bot needed

  • 00:47 < JohnQnS> i sold out all also 185

  • 00:47 < otherjoe> it sold mine after I canceled it

  • 00:47 < maursader> When [-wolong-] makes you sell at a loss, you pick up more at the bottom price

  • 00:47 < Nekomata3> i sold too

  • 00:47 < maursader> then you sell at the higher price

  • 00:47 < Nekomata3> lol

  • 00:47 < maursader> so its all legit

  • 00:47 < drastic> that was bad i have loss

  • 00:47 < rowtrip> lost 100000 on 185

  • 00:47 < coj> there's a 31btc buy stack

  • 00:47 < cokea> come guys why is this happening

  • 00:47 < cokea> the idea is to lower the cost

  • 00:47 < Tyme> you have to be faster here guys, it's all about seconds

  • 00:47 <@[-wolong-]> TELL ME HOW ARE YOU BLEEDING

  • 00:47 < Nekomata3> noooo

  • 00:47 <@k33k> everyone who lost there got screwed by everyone who didn't cooperate

  • 00:47 < Kinci1> guys i just sold all mine at 185

  • 00:47 <@[-wolong-]> IF U CUT YOUR COST BY 15 SATOSHIS

  • 00:47 < Kinci1> whats everyone doing

  • 00:47 -!- OrderZero [~order@c-76-106-163-145.hsd1.fl.comcast.net] has joined #marketmakers

  • 00:47 <@k33k> there were 300+BTC on at 373

  • 00:47 < maursader> If you follow [-wolong-] you'll end up gaining

  • 00:47 <@k33k> and like 29BTC max at 185

  • 00:47 < Nekomata3> yes

  • 00:47 < stormcrow> hinv: We are CONTROLLING THE PRICE. What we just did was "Sell before the price drops"

  • 00:48 < Nekomata3> it would go back even lower

  • 00:48 < alphaw0lf> we are bleeding now if we dont pull it thru and push down to 170ish

  • 00:48 < alphaw0lf> could we please continue

  • 00:48 < Crat0s> we would have been fine if everyone went in ... hinv i bought in higher than taht too

  • 00:48 < Nekomata3> ^

  • 00:48 < alphaw0lf> or what

  • 00:48 -!- mode/#marketmakers [+m] by [-wolong-]

  • 00:48 <@[-wolong-]> DOESN'T MATTER IF U BUY AT 210

  • 00:48 <@[-wolong-]> 300

  • 00:48 <@[-wolong-]> I DON'T CARE

  • 00:48 <@[-wolong-]> YOU DON'T CARE ABOUT PRICES

  • 00:48 <@[-wolong-]> WE SELL 185

  • 00:48 <@[-wolong-]> WE MOVE PRICES DOWN FAST

  • 00:48 <@[-wolong-]> WE BUY BACK

  • 00:48 <@[-wolong-]> 170

  • 00:48 <@[-wolong-]> YOU SAVED 16

  • 00:48 <@[-wolong-]> 15

  • 00:48 <@[-wolong-]> UNLESS YOU WANT TO WAIT

  • 00:48 <@[-wolong-]> HOW MANY DAYS?WEEKS? MONTHS BACK TO 210? 300?

  • 00:48 <@[-wolong-]> WE DON'T WAIT FOR MARKET TO MOVE

  • 00:48 <@[-wolong-]> WE MOVE THE MARKET

  • 00:48 <@[-wolong-]> DAMN IT

  • 00:49 -!- mode/#marketmakers [-m] by [-wolong-]

  • 00:49 < OrderZero> Hey all, what's the latest?

  • 00:49 < smshba> [-wolong-], best of worst situation, do we buy back now? or hold for lower?

  • 00:49 < maursader> Movie quote right there.

  • 00:49 < Kinci1> wolong is right!

  • 00:49 <@[-wolong-]> samsh buy back

  • 00:49 < hinv> [-wolong-] because I baught heavy at 210.  So what buy back 170, the fishes think the trend is heading back to 100 and wouldn't want to wait.

  • 00:49 < Kinci1> everyone jump on and lets take control

  • 00:49 <@[-wolong-]> BUY BACK COST, im sorry to those who lost

  • 00:49 < smshba> thanks

  • 00:49 < molunbun> selfish people screwed us

  • 00:49 <@[-wolong-]> buy back a few satoshis higher

  • 00:49 < Crat0s> [-wolong-] can we do another push down to about 170?

  • 00:49 < Crat0s> I need to get my doge back

  • 00:49 <@k33k> [-wolong-]: dont' have to be sorry. not your fault

  • 00:49 <@Tuxedage> hinv, if you're not going to participate, please leave. No offence.

  • 00:49 < alphaw0lf> im writing bot .. we wont have this problem soon.. as long as everyone clicks ok...

  • 00:49 <@k33k> blame is on everyone who chickened out

  • 00:49 <@Tuxedage> But we don't need you here.

  • 00:49 < Kinci1> hinv, you sell at 185 buy at 170 so your no cost is like 200's not 201

  • 00:49 < Kinci1> 210

  • 00:49 < Staynov> It didn't work this time. The Best thing to do right now is buy, we alter pump

  • 00:49 <@[-wolong-]> HINV DOESN'T MATTER IF IT GOES TO 0

  • 00:49 < alphaw0lf> i can have bot send list of names for those that actually trade

  • 00:49 <@[-wolong-]> WE SOLD FIRST, WE CUT OUR COST

  • 00:49 <@[-wolong-]> DAMN IT

  • 00:49 < Kinci1> i did

  • 00:50 < Kinci1> 30 btc

  • 00:50 < alphaw0lf> :/

  • 00:50 < Nekomata3> there goes 0.1 btc

  • 00:50 < dogetrader99> fuck

  • 00:50 < JohnQnS> buying back now?

  • 00:50 < OrderZero> Obviously I missed something big :/

  • 00:50 <@[-wolong-]> even if PRICES WERE TO MOVE TO 50 BECAUSE OF OUR SHORT

  • 00:50 < OrderZero> damn time zones

  • 00:50 <@[-wolong-]> WE STILL PROFIT

  • 00:50 < Nekomata3> ye

  • 00:50 <@Tuxedage> hinv, it's very simple math. A trader doesn't care about what price used to be, only what the price is RIGHT NOW.

  • 00:50 < JohnQnS> or better stay btc and wait or how we are doing?

  • 00:50 <@k33k> I want to see screenshots on the next action

  • 00:50 < molunbun> yes

  • 00:50 < JohnathonDoe> OrderZero: nothing more than a lot of idiots ruining things

  • 00:50 < BJUK> ppl re so used to buying before selling that they mess up their maths

  • 00:50 < molunbun> we'll have more doge

  • 00:50 < Kokes> Oh my days, even I understand this this, just listen to wolong, we own the market, we can do everything we want, as long as we TRUST wolong.

  • 00:50 < Kokes> where's the trust

  • 00:50 <@Tuxedage> ^^^

  • 00:50 <@Tuxedage> Please cooperate.

  • 00:50 < OrderZero> JohnathonDoe: I see

  • 00:50 < alphaw0lf> wolong we only profit if we can get prices back up afterwards...

  • 00:50 <@[-wolong-]> What's everyone average loss?

  • 00:50 < alphaw0lf> so lets get them up and retry?

  • 00:50 < Nekomata3> Kokes exac

  • 00:50 < OrderZero> 0 :D

  • 00:51 < molunbun> 2.7%

  • 00:51  * OrderZero has never lost anything on this market....yet

  • 00:51 < smshba> god damn that hurt

  • 00:51 < Kinci1> 3.5%

  • 00:51 < Solnse> if I buy back again now at a loss what is to stop it from happening agian

  • 00:51 < alphaw0lf> or keep goin down to 170 like original plan

  • 00:51 <@Tuxedage> I lost around 4%, I think.

  • 00:51 <@Tuxedage> Need to calculate.

  • 00:51 < alphaw0lf> im good with both :/

  • 00:51 < Kokes> 3-4 %

  • 00:51 <@[-wolong-]> hinv REMEMBER ONE THING DOESN'T MATTER IF YOU ARE BUY AT 210, AND THEN U WAIT TO 280, IF YOU DON'T SELL THERE IS NOT PROFIT

  • 00:51 < otherjoe> I have no idea, I just lost

  • 00:51 < Tyme> 3%, its ok

  • 00:51 < coj> i haven't bought back, oh well. i sit out of the market

  • 00:51 < Zimnx> 1.7%

  • 00:51 <@[-wolong-]> PROFIT IS ONLY CALCULATED ONCE IT ENTERS YOUR POCKET

  • 00:51 < Crat0s> most of thsoe doges I just sold were bought above 220 ... kidna sucked.

  • 00:51 < cokea> it's all virtual yeah

  • 00:51 < stormcrow> heh. I can't even get back in. cryptsy lost my buy order.

  • 00:51 <@Tuxedage> No worries. wolong, you've given me a lot more than 4% gains, so I don't care even if I lost this.

  • 00:51 <@[-wolong-]> damn alphaw0lf

  • 00:51 < drastic> there was a panic call on chat

  • 00:52 <@[-wolong-]> i suggest you finish the trading api fast

  • 00:52 < drastic> thats what happend

  • 00:52 < alphaw0lf> no doubt

  • 00:52 <@[-wolong-]> so that i can control everyone trade

  • 00:52 < alphaw0lf> im working on it...

  • 00:52 <@[-wolong-]> too much selfish people making everyone losing

  • 00:52 < alphaw0lf> we can have it announce nicknames once a trade is made

  • 00:52 < mr_dick> damn got burned 3%

  • 00:52 < WarAileron> i was all in :s

  • 00:52 < Kokes> damn right

  • 00:52 < alphaw0lf> so we know who isnt trading

  • 00:52 < molunbun> stormcrow, place a 100doge order to force a refresh

  • 00:52 < todamoon_> look all you have one guy that has the market in control just follow the fucking request through

  • 00:52 < todamoon_> you came this far to this channel

  • 00:52 < molunbun> stormcrow, sell 100doge

  • 00:52 < todamoon_> so do it fuck faces

  • 00:52 < Zimnx> alphaw0lf i can join development if you have some kind of version control system

  • 00:52 < mr_dick> [-wolong-], that's why i said we need a scheduled event man

  • 00:52 < stormcrow> yeah. On that, mol.

  • 00:52 <@Tuxedage> Exactly

  • 00:52 -!- Solweintraub__ [~Solweintr@cpe-184-57-68-90.columbus.res.rr.com] has quit [Read error: Connection reset by peer]

  • 00:52 < thatslifeson> IGNORE THE PRICES, they do not matter, what matters is making a profit your trades. If you have cold feet on making a move, and you doubt wolong's ability to coordinate and trade, just do yourself, and everyone in here a favor, and /quit.

  • 00:52 < mr_dick> if everyone comes prepared during a scheduled event

  • 00:52 < mr_dick> noone would hesitate that much

  • 00:52 < Nekomata3> you just had to follow the damn train ;-;

  • 00:53 < sh0lla> I pulled out thankfully but I feel awful for those who didnt in time )))):

    A ROMAN GLASS GAMING DIE | CIRCA 2ND CENTURY A.D. | Christie's


    Comments:"A ROMAN GLASS GAMING DIE | CIRCA 2ND CENTURY A.D. | Christie's"

    URL:http://www.christies.com/Lotfinder/lot_details.aspx?intObjectID=4205385


    Lot Description

    A ROMAN GLASS GAMING DIE
    Circa 2nd Century A.D.
    Deep blue-green in color, the large twenty-sided die incised with a distinct symbol on each of its faces
    2 1/16 in. (5.2 cm.) wide

    Provenance

    Acquired by the current owner's father in Egypt in the 1920s.

    Pre-Lot Text

    THE PROPERTY OF A
    MARYLAND FINE ARTS PROFESSOR

    View Lot Notes >

    Lot Notes

    Several polyhedra in various materials with similar symbols are known from the Roman period. Modern scholarship has not yet established the game for which these dice were used.


    Why This Mars Rover Has Lasted 3,560 Days Longer Than Expected - SFGate


    Comments:"Why This Mars Rover Has Lasted 3,560 Days Longer Than Expected - SFGate"

    URL:http://www.sfgate.com/technology/businessinsider/article/Why-This-Mars-Rover-Has-Lasted-3-560-Days-Longer-5173078.php


    The mission wasn't supposed to last more than 90 days. But 10 years later, NASA's Opportunity rover is still in working condition and continues to send back data from Mars.

    The prolonged health of the rover "was not in anyone's wildest dreams," John L. Callas, a project manager for the Mars Exploration Rover mission, wrote in an editorial for Space.com.

    Opportunity launched on July 7, 2003. The golf-cart-sized robot landed on Mars on Jan. 24, 2004, Pacific Time.

    Its identical twin, Spirit, touched down on the other side of the Red Planet a couple of weeks earlier, but stopped talking to Earth in 2010, around six years into its mission.

    So why has the Opportunity rover been able to outlast its designed lifetime by thousands of days? On its website, NASA attributes the robot's staying power to "a combination of sturdy construction, creative solutions for operating the rovers and even a little luck!"

    "It's a well-made American vehicle," Ray Arvidson, the rover's lead investigator, was quoted by The Register as saying. "These are excellent machines, they are well designed, they're well built, they're fantastic and that's why they're still working."

    Mars is no vacation spot. The ground is covered with sharp rocks and steep hills, dirt tornadoes whirl around the surface, and the planet goes through wild temperature swings, from 80 degrees in the Martian summer to as low as -199 degrees in the winter.

    When tackling rough terrain, the rover doesn't topple over on its six wheels because of a "rocker-bogie" suspension system that was designed for stability. You can see a good explanation of how the rocker-bogie works here.

    Heaters in the body of the unit help batteries and other temperature-sensitive equipment to continue operating in the bitter cold.

    In addition, scientists originally thought that all of the Martian dust blowing around would coat the robot's solar panels, rendering them unusable within a few months. Instead, strong winds have helped to keep the panels relatively clean.

    Callas agrees that the rover's success is due to both human ingenuity and factors that are beyond our own comprehension:

    We can assert that unexpected wind gusts blew the dust off the solar arrays to maintain rover power. We can claim that operational skill permitted the rovers to survive the cold, dark Martian winters. We can pride ourselves that we built exquisite roving machines.

    All of these are true. But does that really explain this unimagined longevity and the tremendous scientific success? Whatever the explanation, this is a grand accomplishment.

    The prime mission of both rovers was to search for geological clues about environmental conditions on early Mars — like evidence of water — and to determine whether those conditions would have been suitable for life. By looking for rocks and soil types that typically form in water, both rovers confirmed that streams, lakes, or rivers once flowed on Mars.

    To date, Opportunity rover has driven about 24 miles and snapped more than 170,000 images. While she's still running, the rover is starting to feel the effects of old age — two of the robot's 10 instruments have stopped working and its robotic arm has become "arthritic," Callas wrote.

    Opportunity is currently stationed at the edge of an exposed outcrop on the rim of Endeavor Crater, where satellite observations suggest that small amounts of clay minerals might be present.

    The aging rover is also continuing to investigate a mysterious jelly-doughnut-shaped rock that appeared in pictures of the same patch of ground taken 13 days apart. Scientists think the rock was probably flicked into its new position by one of the rover's wheels as the machine was turning around. The rock, which has a white rim and center that is deep red, is unlike anything scientists have seen before on Mars.


    A Restful Micro Framework In Go


    Comments:"A Restful Micro Framework In Go"

    URL:http://dougblack.io/words/a-restful-micro-framework-in-go.html


    Posted January 25th, 2014

    Why Go?

    After reading countless articles about the wonders of Go, I wanted to learn more. But, learning a new language— I mean really learning it, not just getting familiar with the general capabilities and syntax—is hard. I'm sure there are tons of books that would purport to teach me all about it, but there is no substitute for hands-on learning.

    I'm a maintainer on the Flask-RESTful framework (which is written in Python) but I've never built a RESTful framework from the ground up. This should be a great opportunity to learn Go.

    Let's do it.

    Oh, and I've already thought of a super clever name: sleepy.

    Meet net/http

    Before we start on the RESTful part of our framework, let's get familiar with how to use the http package to build a simple webserver in Go.

    The http package lets us map request paths to functions. It's pretty simple, but provides plenty for us to work with.

    package main

    import (
        "net/http"
    )

    func response(rw http.ResponseWriter, request *http.Request) {
        rw.Write([]byte("Hello world."))
    }

    func main() {
        http.HandleFunc("/", response)
        http.ListenAndServe(":3000", nil)
    }

    When we go run this program, you'll notice it doesn't terminate. This is because it's listening for connections on port 3000! Let's make a request.

    $ curl localhost:3000
    Hello world.

    Great. Now that we know how to build a simple server in Go, let's make it RESTful.

    Resource

    Defining types is always a great place to start.

    REST is all about resources, so let's start with a Resource type. In REST, we interact with a resource using HTTP methods. An HTTP method will change or query the resource's state, and the resource will respond with a status code and a potential body. The type of the body could be anything. I think we've defined enough about a resource to create the following type.

    type Resource interface {
        Get(values url.Values) (int, interface{})
        Post(values url.Values) (int, interface{})
        Put(values url.Values) (int, interface{})
        Delete(values url.Values) (int, interface{})
    }

    We've defined a Resource interface that defines the four most common HTTP methods. The methods each take a url.Values argument, which is just defined as a simple map type in the net/url package:

    // in package net/url
    type Values map[string][]string

    These functions return two values: an int and an interface{}. The int will be the status code of the response, while the interface{} will be the data (in any format) the method returns.
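    Since url.Values is just a map with helper methods, reading form or query parameters is straightforward. A small standalone sketch (illustrative only, not part of sleepy; the "page" and "tag" keys are hypothetical):

    ```go
    package main

    import (
    	"fmt"
    	"net/url"
    )

    func main() {
    	// Build the same kind of url.Values a handler gets from request.Form.
    	values := url.Values{}
    	values.Set("page", "2")  // Set replaces any existing values for the key
    	values.Add("tag", "go")  // Add appends, since each key maps to a []string
    	values.Add("tag", "rest")

    	fmt.Println(values.Get("page")) // Get returns the first value: "2"
    	fmt.Println(values["tag"])      // direct map access: [go rest]
    }
    ```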

    405 Not Supported

    But, not every resource will want to implement all of these methods. How can we provide a default implementation of all methods that returns a 405 Method Not Allowed when called? Go doesn't allow interfaces to provide default implementations, so I thought I was blocked. Then I found out about embedding.

    Basically, if you declare a struct A in the definition of another struct B, B gets all of the methods of A, with the caveat that the receiver of all of the embedded methods will be A. Using this, I came up with the following solution:

    type GetNotSupported struct{}

    func (GetNotSupported) Get(values url.Values) (int, interface{}) {
        return 405, map[string]string{"error": "Not implemented"}
    }

    The definition of a Resource that only supports Get looks like this.

    type MyResource struct {
        PostNotSupported
        PutNotSupported
        DeleteNotSupported
    }

    It's not the sexiest solution, but the only one I could think of while still using idiomatic Go (i.e. not using the reflect package).
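    To see the embedding trick in isolation, here's a self-contained sketch. The WidgetResource type and the two-method interface are hypothetical simplifications, not the sleepy code itself: the embedded NotSupported structs supply the default 405 responses, and WidgetResource only overrides Get.

    ```go
    package main

    import (
    	"fmt"
    	"net/url"
    )

    // A trimmed-down, two-method version of the Resource interface.
    type Resource interface {
    	Get(values url.Values) (int, interface{})
    	Post(values url.Values) (int, interface{})
    }

    // Default implementations, provided via embedding.
    type GetNotSupported struct{}

    func (GetNotSupported) Get(values url.Values) (int, interface{}) {
    	return 405, "method not allowed"
    }

    type PostNotSupported struct{}

    func (PostNotSupported) Post(values url.Values) (int, interface{}) {
    	return 405, "method not allowed"
    }

    // WidgetResource only implements Get; Post falls through to the embedded default.
    type WidgetResource struct {
    	PostNotSupported
    }

    func (WidgetResource) Get(values url.Values) (int, interface{}) {
    	return 200, map[string]string{"widget": "ok"}
    }

    func main() {
    	var r Resource = WidgetResource{}
    	code, _ := r.Get(nil)
    	fmt.Println(code) // 200
    	code, _ = r.Post(nil)
    	fmt.Println(code) // 405, from the embedded PostNotSupported
    }
    ```

    Note the receiver caveat from above: inside the promoted Post, the receiver is the embedded PostNotSupported value, not WidgetResource.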

    API

    The next type we'll construct is our API type. An API could contain many internal fields, but let's keep it simple and have our API just be a receiver for methods that manage our resources.

    So, we make it an empty struct: type API struct{}.

    Putting It All Together

    Revisiting our simple Go webserver, we quickly encounter a problem. Our Resource methods are of type:

    func(url.Values) (int, interface{})

    but http.HandleFunc requires a function of type:

    func(http.ResponseWriter, *http.Request)

    We need a way to rectify this discrepancy. Initially, this didn't quite feel possible. I couldn't see how to convert a method like Get(values url.Values) (int, interface{}) to the type I needed. It just didn't match up.

    Then I remembered Go has support for first class functions! We can pass http.HandleFunc a function of the correct type that turns around and calls one of the Resource functions. Here's what that looks like.

    func (api *API) requestHandler(resource Resource) http.HandlerFunc {
        return func(rw http.ResponseWriter, request *http.Request) {
            var code int
            var data interface{}

            method := request.Method // Get HTTP method (string)
            request.ParseForm()      // Populates request.Form
            values := request.Form

            switch method {
            case "GET":
                code, data = resource.Get(values)
            case "POST":
                code, data = resource.Post(values)
            case "PUT":
                code, data = resource.Put(values)
            case "DELETE":
                code, data = resource.Delete(values)
            }

            // TODO: write response
        }
    }

    So requestHandler returns a function of the correct type for HandleFunc. That function dispatches to the correct Resource method!

    Solving this problem made me pretty happy. At first, I was stuck because I was trying to solve this using object-oriented design. It was a lot of fun to be initially frustrated by a perceived hole in Go's design and then realize I was approaching the problem from the wrong perspective. Very cool.

    Anyways, back to our API.

    Meet encoding/json

    After we've received the data (of type interface{}) from the Resource method, we need to turn it into JSON. Conveniently, Go contains an encoding/json package that does just this.

    In json, there is a function json.NewEncoder that looks like this.

    func NewEncoder(w io.Writer) *Encoder

    The Encoder provides an Encode method that serializes an interface{} (remember, this is any type in Go) to JSON. (Thanks twitter!) You'll notice it takes an io.Writer, which is an interface satisfied by our http.ResponseWriter. Perfect. Here's what this looks like.

    code, data = resource.Get(values)
    encoder := json.NewEncoder(rw) // rw is http.ResponseWriter
    encoder.Encode(data)           // calls ResponseWriter.Write()

    Pretty straightforward. Of course, in the final version, we'll need to make sure we check the error returned from Encode, but you get the idea.
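    Outside of an HTTP handler, you can see the same mechanics by encoding into a bytes.Buffer, which also satisfies io.Writer (an illustrative sketch, not sleepy code):

    ```go
    package main

    import (
    	"bytes"
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	data := map[string]string{"hello": "world"}

    	var buf bytes.Buffer // any io.Writer works; in sleepy it's the http.ResponseWriter
    	encoder := json.NewEncoder(&buf)
    	if err := encoder.Encode(data); err != nil {
    		// e.g. data contained a channel, func, or other unserializable type
    		fmt.Println("encode failed:", err)
    		return
    	}

    	fmt.Print(buf.String()) // {"hello":"world"} followed by a newline
    }
    ```

    One detail worth knowing: Encode appends a trailing newline after each JSON value it writes.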

    Construct The Response

    Now that we have a status code and a body, we need to actually send the response to the client. Unsurprisingly, this is also pretty easy with the Go standard library. We just use the http.ResponseWriter interface. You'll notice it's already an input parameter to our anonymous function:

    func(http.ResponseWriter, *http.Request)

    It's a pretty simple interface.

    rw.WriteHeader(code)
    rw.Write(content)

    That's it.

    Finish The API

    We can now take a Resource and convert it to a method we can give to http.HandleFunc. Let's make a convenience method on our API struct that makes this easy.

    func (api *API) AddResource(resource Resource, path string) {
        http.HandleFunc(path, api.requestHandler(resource))
    }

    and a method that starts our API on a given port.

    func (api *API) Start(port int) {
        portString := fmt.Sprintf(":%d", port)
        http.ListenAndServe(portString, nil)
    }

    and finally, the api.Abort helper used in the full listing below.

    func (api *API) Abort(rw http.ResponseWriter, statusCode int) {
        rw.WriteHeader(statusCode)
    }

    Usage

    Whew. That was a lot of code snippets. Let's take a second to reflect on what we've built by constructing an actual example. Let's assume we've saved all of the above code into a sleepy package.

    package main

    import (
        "net/url"
        "sleepy"
    )

    type HelloResource struct {
        sleepy.PostNotSupported
        sleepy.PutNotSupported
        sleepy.DeleteNotSupported
    }

    func (HelloResource) Get(values url.Values) (int, interface{}) {
        data := map[string]string{"hello": "world"}
        return 200, data
    }

    func main() {
        helloResource := new(HelloResource)
        var api = new(sleepy.API)
        api.AddResource(helloResource, "/hello")
        api.Start(3000)
    }

    Now we run the program and hit our new endpoint!

    $ curl localhost:3000/hello
    {"hello": "world"}

    So, we construct a struct that implements Resource, assign it to a path and start our API. Pretty simple! We've built a working RESTful framework in Go.

    Improvements

    There are a few things I'd like to add to sleepy.

    • Parsing values out of the URL.
    • Parameter type validation
    • Support for responding with custom headers.

    But not much more than that. After all, it's supposed to be a micro-framework.

    Full Code

    Here's the entire framework. If you're a Go expert, please let me know how you would have done this better! @dougblackio.

    package sleepy

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "net/url"
    )

    const (
        GET    = "GET"
        POST   = "POST"
        PUT    = "PUT"
        DELETE = "DELETE"
    )

    type Resource interface {
        Get(values url.Values) (int, interface{})
        Post(values url.Values) (int, interface{})
        Put(values url.Values) (int, interface{})
        Delete(values url.Values) (int, interface{})
    }

    type (
        GetNotSupported    struct{}
        PostNotSupported   struct{}
        PutNotSupported    struct{}
        DeleteNotSupported struct{}
    )

    func (GetNotSupported) Get(values url.Values) (int, interface{}) {
        return 405, ""
    }

    func (PostNotSupported) Post(values url.Values) (int, interface{}) {
        return 405, ""
    }

    func (PutNotSupported) Put(values url.Values) (int, interface{}) {
        return 405, ""
    }

    func (DeleteNotSupported) Delete(values url.Values) (int, interface{}) {
        return 405, ""
    }

    type API struct{}

    func (api *API) Abort(rw http.ResponseWriter, statusCode int) {
        rw.WriteHeader(statusCode)
    }

    func (api *API) requestHandler(resource Resource) http.HandlerFunc {
        return func(rw http.ResponseWriter, request *http.Request) {
            var data interface{}
            var code int

            request.ParseForm()
            method := request.Method
            values := request.Form

            switch method {
            case GET:
                code, data = resource.Get(values)
            case POST:
                code, data = resource.Post(values)
            case PUT:
                code, data = resource.Put(values)
            case DELETE:
                code, data = resource.Delete(values)
            default:
                api.Abort(rw, 405)
                return
            }

            // Check that the data is serializable before writing the header.
            if _, err := json.Marshal(data); err != nil {
                api.Abort(rw, 500)
                return
            }

            responseWriter := json.NewEncoder(rw)
            rw.WriteHeader(code)
            if responseWriter.Encode(data) != nil {
                api.Abort(rw, 500)
                return
            }
        }
    }

    func (api *API) AddResource(resource Resource, path string) {
        http.HandleFunc(path, api.requestHandler(resource))
    }

    func (api *API) Start(port int) {
        portString := fmt.Sprintf(":%d", port)
        http.ListenAndServe(portString, nil)
    }

    Conclusion

    I hope this was informative. I definitely learned a lot about the net package in Go and got a chance to cut my teeth on Go's design. Again, if you see anything you would have done better, please let me know! @dougblack.

    sleepy is on github.

    afaqurk/linux-dash · GitHub

    Netflix Is Going to Rule TV After All | Wired Business | Wired.com


    Comments:"Netflix Is Going to Rule TV After All | Wired Business | Wired.com"

    URL:http://www.wired.com/business/2014/01/turns-netflix-going-rule-tv


    Netflix CEO Reed Hastings could barely contain himself. On Wednesday, after reporting another stellar financial quarter for the company — 2.3 million new subscribers, nearly $50 million in profits — the man who destroyed the Blockbuster empire started taunting his latest nemesis: HBO.

    When asked about HBO co-president Richard Plepler claiming that the company wasn’t hurt by people sharing their passwords for HBO Go, its streaming video app, Hastings pretended to give out Plepler’s personal login credentials: “It’s plepler@hbo.com,” Hastings said, “and his password is netflixbitch.”

    Pretty cocky for the guy who invented Qwikster. But his swagger has substance behind it. For a company that was on the death-watch list just a few years ago, Netflix has engineered one of the most remarkable corporate turnarounds in recent memory. Rising from the depths of a fading DVD-by-mail business, Hastings and company have transcended the movie rental market. Now they’re taking on television, both the industry and the medium itself. And in a way, Netflix has already won there, too.

    Rising from the depths of a fading DVD-by-mail business, Netflix has transcended the movie rental market to take on television itself.

    When Robin Wright accepted her Golden Globe earlier this month for her stunning turn in the Netflix political drama House of Cards, it was kind of a weird moment. Back in the days when the only thing Netflix meant was a red envelope in the mailbox, the idea that the company would win a Golden Globe just wouldn’t have made sense. Like Blockbuster, Netflix was in the content delivery business, not the content creation business.

    But as Wright took to the stage, Hastings must have felt a surge of satisfaction, and we’re guessing his shareholders did too, especially after seeing the company’s stock price quadruple over the past year. Netflix shares have always been volatile, especially around earnings time, with manic swings in either direction depending on the good news or the bad. But the overall upward surge — which tops even that of streaming video rival Amazon — is hard to dismiss as mere froth.

    Whatever the other underlying reasons for investor optimism, it’s impossible to ignore that the rise of Netflix has come at the same time the company launched some of the most ambitious shows in the entertainment business.

    Though they look and feel like what you’d see on TV, calling Netflix’s productions “television” doesn’t really make sense, even if they were nominated for Emmys. I, for one, watched all of House of Cards and Orange Is the New Black on my iPad. Yes, it’s possible to funnel Netflix to an actual TV via any number of devices, and soon, that might even include your cable box. But the true threat to HBO and the cable business model is that Netflix can transcend the television itself, offering you shows on all your devices. HBO has been slow to make its HBO Go app available to viewers who don’t subscribe to its television channel, and on the whole, the cable TV industry still depends on people putting their butts on couches in front of giant flatscreens.

    Users, Not Just Viewers

    Meanwhile, Netflix charges eight bucks a month to watch anything, anytime, anywhere, painlessly. Until recently, Netflix still seemed like a supplement to the rest of TV, especially given its sometimes spotty offerings. But its original content upended that calculus, especially because it showed a willingness and ability to make shows whose quality rivaled anything on cable.

    What’s more, Netflix isn’t hamstrung by anything as retro as seasons or time slots. No matter how good their shows are, the core business of HBO or Showtime or AMC is limited by how much programming they can fit into a 168-hour week. But Netflix can make as many or as few shows as it likes and put them up any time of year.

    Netflix is also willing to release an entire season of shows at one time. Despite his predilection for Jesse Pinkman-esque name-calling, Reed Hastings’ most powerful taunt to traditional television has been Netflix’s full embrace of binge-watching. Despite the rise of on-demand and DVRs, the traditional television business still depends on dictating to viewers when they should sit down and watch. By putting all the control in the hands of its now 44 million users, Netflix has shown that it understands what people want and is leading its rivals in delivering.

    That puts Netflix in control of television’s future, from how we watch to how much we pay. People may still pay for HBO to watch Game of Thrones and spring for cable to watch sports. But in the TV world, Netflix red is the new black.

    interfluidity » “Marriage promotion” is a destructive cargo cult


    Comments:"interfluidity » “Marriage promotion” is a destructive cargo cult"

    URL:http://www.interfluidity.com/v2/4938.html


    I think I’ll basically be repeating what Matt Yglesias said yesterday, but maybe I can put things more plainly.

    “Marriage promotion” as a means of addressing social problems at the lower end of the socioeconomic ladder is a bad idea. It’s not a neutral idea, or a nice idea that probably won’t work. It’s inexcusably obtuse and may be outright destructive. It is quite literally a cargo cult.

    A cargo cult is a particularly colorful way of mistaking cause for effect. Airplanes do not actually come to remote Pacific Islands because of rituals performed by soldiers at airports. But absent other information, to someone with no knowledge of the larger world, it might well look that way. So when the soldiers left and the airplanes full of valuable stuff no longer came, it’s forgivable in its way that some islanders populated the abandoned tarmacs with wooden facsimile airplanes and tried to reenact the odd dances that used to precede the arrival of wonderful machines. It is forgivable, but it didn’t work. The actual causes of cargo service to remote Pacific Islands lay in the hustle of industries vast oceans away and in the logistics of a bloody war, all of which were invisible to local spectators. Soldiers’ dances on the tarmac were an effect of the same causes, not an independent source of action. That is not to say those dances were irrelevant to the great bounty from the skies. An organized airport is part of the mechanism through which the deeper causes of cargo service have their effect, so something like those dances would always be correlated with cargo service. But even a perfectly equipped and organized airport will not cause airplanes sua sponte to deliver valuable goods to islanders. A mock facsimile even less so.

    The case for marriage promotion begins with some perfectly real correlations. Across a variety of measures — household income, self-reported life satisfaction, childrearing outcomes — married couples seem to do better than pairs of singles (and much better than single parents), particularly in populations towards the lower end of the socioeconomic ladder. So it is natural to imagine that, if somehow poor people could be persuaded to marry more, they too would enjoy those improvements in household income, life satisfaction, and childrearing. Let them eat wedding cake!

    But neither wedding cake nor the marriages they celebrate cause observed “marriage premia” any more than dances on tarmacs caused airplanes to land on Melanesian islands. In fact, for the most part, the evidence we have suggests that marriage is an effect of other things that facilitate good social outcomes rather than a cause on its own. In particular, for poor women, the availability of suitable mates is a binding constraint on marriage behavior. People in actually observed marriages do well because they are the lucky ones to find scarce good mates, not because marriage would be a good thing for everyone else too. Marrying badly, that is marriage followed by subsequent divorce, increases the poverty rate among poor women compared to never marrying at all. Married biological parents who stay together may be good for child rearing, but kids of mothers who marry anyone other than their biological father do no better than children of mothers who never marry at all. As McLanahan and Sigle-Rushton put it (from the abstract):

    [U]nmarried mothers and their partners are vastly different from married parents when it comes to age, education, health status and behaviour, employment, and wage rates. These differences translate into important differences in earnings capacities, which, in turn, translate into differences in poverty. Even assuming the same family structure and labour supply, our estimates suggest that much of the difference in poverty outcomes by family structure can be attributed to factors other than marital status. Our results also suggest that full employment is essential to lifting poor families — married or otherwise — out of poverty.

    Let’s stop with the litany of citations for a minute and just think like humans. Marriage is a big deal. The stylized fact that the great preponderance of grown-ups with kids who seem economically and socially successful are married is known to everybody, rich and poor, black and white. Yes, the traditional family is not uncontested. There are, in our culture, valorizations of single-parenthood as statements of feminist independence, valorizations of male liberty and libertinism, aspirational valorizations of nontraditional families by until-recently-excluded gay people, etc. But, despite the outsized role played by Kurt on Glee, these alternative visions are numerically marginal, and probably especially marginal among the poor. Single motherhood is the alternative family structure that matters from a social welfare (rather than culture-war) perspective. The problem marriage promotion could solve, if it could solve any problem at all, would be to increase the well-being of the people who currently become single mothers and of their children.

    But why do single women choose to become single mothers? It does not, in any numerically significant way, seem to have much to do with purposeful rebellion against traditional family norms. No, marriage of poor women seems constrained by the availability of promising mates. And why might that be?

    Charles Murray recently wrote a wonderful, terrible, book called “Coming Apart”. The book is wonderful, because it identifies and very sharply observes the core social problem of our time, the Great Segregation (sorry Tyler), or more accurately, the Great Secession of the rich from the rest, and especially from the poor. The book is terrible, because it then analyzes the problems of the poor as though they come from nowhere, as though phenomena Murray characterizes as declines in industriousness, religiosity, and devotion to marriage among the poor have nothing to do with the evacuation of the rich into dream enclaves. There are obvious connections that Murray doesn’t make because, I think, he simply doesn’t wish to make them. Let’s make some. We were talking about marriage.

    Murray does a wonderful job of describing the homogamy of our socioeconomic elites. The people who, at marriageable age, seem poised to succeed economically and socially, tend to marry one another. Johnnie doesn’t marry the girl next door, who might have been a plumber’s daughter while Daddy was a bank manager. Johnnie doesn’t marry anyone at all he met in high school, but holds out for someone who got into the same sort of selective college he got into. The children of the rich marry children of the rich, with notable allowances made for children of the nonrich who have accumulated credentials that signal a high likelihood of present or future affluence. Of course, love knows no boundaries.

    As a matter of simple arithmetic, increasing homogamy among the elite and successful implies a reduced probability that a person who cannot lay claim to that benighted group will be able to “marry up”, as it were. Once upon a time, in the halcyon days that Murray contrasts to the present, the courting would not have been so crass. There were many fewer markers of social class and future affluence. The best and brightest were not so institutionally, geographically, and culturally segregated from the rest. (That is, within the community of white Americans. For black Americans, all of this is old hat.) The risk of “mismarrying”, for a male, was not so great, as he would be the primary breadwinner anyway, and her family, while perhaps poorer than his own, was unlikely to be in desperate straits. Men could choose whom they liked, in a personal, sexual, and romantic sense without great cost. Women from poor-ish backgrounds had a decent chance at landing a solid breadwinner, if not the next President. Very much like an insurance pool, a large and mixed pool of potential spouses renders marriage on average a pretty good deal for everyone. Really bad future husbands existed then as now, and then as now women were wise to do all they could to avoid marrying them. But the quality of a marriage is never revealed until well after you are in it. In a middle-class society, it was reasonable for a woman to guess that a nice guy she could fall in love with would be able to be a good husband and father too.

    Flash-forward to the present. We now live in a socially and economically stratified society. By the time we marry, we can ascertain with reasonable confidence what kind of job, income, neighborhood, and friends a potential mate is likely to come with. The stakes are much higher than they used to be. Our lifestyle norms are based on two-earner households, so men as well as women need to think hard about the earning prospects of potential mates. Increasing economic dispersion — inequality — means that it is quite possible that a potential mate’s family faces circumstances vastly more difficult than one’s own, if one is near the top of the distribution. It is unfashionable to say this in individualistic America, but it is as true now as it was for Romeo and Juliet that a marriage binds not only two people, but two families. If you have a good marriage, you will love your spouse. If you love your spouse and then her uninsured mother is diagnosed with cancer, those medical bills will to some perhaps large degree become your liability. More prosaically, if the in-laws can’t keep the heat on, do you wash your hands of it and let them shiver through the winter? In a very unequal society, the costs and risks of “marrying down” are large.

    As with an insurance pool, too much knowledge can poison the marriage pool, and reduce aggregate welfare by preventing distributive arrangements that everyone would rationally prefer in the absence of information, but which become the subject of conflict when information is known in advance. Because the stakes are now very high and the information very solid, good marriage prospects (in a crass socioeconomic sense) hold out for other good marriage prospects. The pool that’s left over, once all the people capable of signaling their membership in the socioeconomic elite have been “creamed” away, may often be, objectively, a bad one. Marriage has a fat lower tail. When you marry, you risk physical abuse, you risk appropriation of your wealth and income, you risk mistreatment of the children you hope someday to have, you risk the Sartre-ish hell of being bound eternally to someone whose company is intolerable. More commonly, you risk forming a household that is unable to get along reasonably in an economic sense, causing conflicts and crises and miseries even among well-intentioned and decent people. It is quite rational to demand a lot of evidence that a potential mate sits well above the fat left tail, but the ex ante uncertainty is always high. When the right-hand side of the desirability distribution is truncated away, marriage may simply be a bad risk.

    If you are at all libertarian, what the behavior of the poor tells you is that it is a bad risk. After all, marriage is not subject to a Bryan-Caplan-esque critique of politics, where people make bad choices in the voting booth that they would not make in the supermarket because they don’t own the costs of a bad vote. The consequences of a decision to marry or not to marry or whom to marry are internalized very deeply by the people who make them. Humans, rich and poor, have strong incentives to try to make those choices well. Common sense, social science, and revealed preference all suggest that marriage rates among the poor have declined because the value of the contingent claim upon the future represented by the words “I do” has also declined within the affected population.

    Promoting marriage among this population is not merely ineffective. It is at best ineffective. If the marriage-promoters persuade people to marry despite circumstances that render it likely they will marry poorly, the do-gooders will have done outright harm. Pacific Islanders no doubt bore some cost to build their wooden planes, lashed to a mistaken theory of causality. But lives were not destroyed. Overcoming people’s well-founded misgivings about the quality of potential mates with moral exhortations and clipboards of superficial social science might well destroy lives. It would create plenty of success stories for marriage promoters, sure, because even bad bets turn out well now and again. But it would create more tragedies than successes, tragedies that very likely would be blamed on personal deficiencies of the unhappy couple while the successes would be counted as victories for marriage itself, in some insane ideological version of the fundamental attribution error.

    Fortunately, people aren’t stupid, so marriage promotion is more likely to be ineffective than devastating. But why go there at all? There is some evidence, for example, that where prevailing social norms prohibit premarital fun stuff and push towards early marriage, people do marry earlier and they marry poorly. Social norms matter, and even smart people are sometimes guided by them to do stupid things. Let’s not reinforce foolish norms.

    None of this is to say marriage is bad! On the contrary, despite my lefty hippie enthusiasm for transgressive goat sex and stuff, I think in the context of the actually existing society, the prevalence of durable marriages is a reflection of social health. Marriage is part of how we organize a good life when a good life is on offer, just like airports with people guiding planes on the tarmac are part of how Pacific Islanders might organize trade for valuable cargo. But before the odd dances on the tarmac must come the production of goods and services for trade, or at least some kind of arrangement with the people in faraway places who control the airplanes. Before you get to smiling families, you have to create the material circumstances that render marriage on average a good deal. For poor women in particular, it very often is no longer a good deal.

    But what about the children? One variant of marriage-centric social theory refrains from pushing marriage so hard, and simply asks that people delay childrearing until the marriage comes. (See e.g. Reihan Salam for some discussion.) If a woman is likely to find a good spouse at a reasonable age, then it might make sense to suggest she delay childbearing until the happy couple is stable and married, since kids reared by married biological parents seem to do better than other kids. Even that is subject to a causality concern: Perhaps childrearing is best performed by the kind of mother capable of finding a good mate, and at a time some unobservable factor renders her both ready to raise a child well and likely to take a husband. This would create a spurious correlation between the presence of biological fathers and good kid outcomes. We can’t rule that out, sure. But we have no reason to think it’s so, and lots of common sense reasons to think a biological father in a stable marriage improves outcomes by contributing to better parenting. So, I’d agree that women likely to find great marriage partners should by all means delay children until they have actually found one.

    But women likely to find great marriage partners already do exactly that. Single motherhood is not a frequent occurrence among women who expect to marry happily and soon. The relevant question is whether we should discourage women who reasonably expect they may not find a good spouse at all, at least not while they are in their youth, from having children. That is to say, should we tell women who have been segregated into the bad marriage market, who on average have lowish incomes and unruly neighbors and live near bad schools, that motherhood is just not for them, probably ever? We could bring back norms of shame surrounding single motherhood, or create other kinds of incentives to reduce the nonadoption birth rate of people statistically likely to raise difficult kids. It is possible.

    I think it would be monstrous. I believe that, as a society, we should commit ourselves to creating circumstances in which the fundamentally human experience of parenthood is available to all, not barred from those we’ve left behind on our way to good schools and walkable neighborhoods. Women unlikely to marry who wish to have children by all means should. The shame is ours, not theirs. It belongs to those of us who call ourselves “elite”, who are so proud of our “achievements” that we walk away without a care from the majority of our fellow citizens and fellow humans, from people who in other circumstances, even in the not so distant past, would have been our friends and coworkers, lovers and spouses. It’s on us to join together what we have put asunder.

    Update History:

    • 23-Jan-2014, 12:55 p.m. EEST: “invisible to local spectators” Thanks Noumenon!
    • 24-Jan-2014, 2:25 a.m. EEST: “caused airplanes landing to land on Melanesian islands”, fixed misspelling of Reihan Salam’s name.
    • 24-Jan-2014, 11:35 a.m. EEST: “as though phenomena he Murray characterizes”

    This entry was posted on Wednesday, January 22nd, 2014 at 5:45 pm EST. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.

