Channel: Hacker News 50

No-fly list takes legal hit: Judge rules U.S. government wrongly labeled woman a terrorist - San Jose Mercury News


Comments:"No-fly list takes legal hit: Judge rules U.S. government wrongly labeled woman a terrorist - San Jose Mercury News"

URL:http://www.mercurynews.com/crime-courts/ci_24911422/u-s-government-loses-challenge-no-fly-lists


The federal government violated a former Stanford University doctoral student's legal rights nine years ago when it put her on its secretive "no-fly" lists targeting suspected terrorists, a San Francisco federal judge ruled Tuesday.

In a decision for the most part sealed, U.S. District Judge William Alsup disclosed that Rahinah Ibrahim was mistakenly placed on the controversial list and said that the government must now clear up the mistake. The decision comes in a case that has for the first time revealed how the U.S. Department of Homeland Security assembles the no-fly lists, used to tighten security in the aftermath of the Sept. 11, 2001 terrorist attacks.

The Obama administration has vigorously contested the case, the first of its kind to reach trial, warning that it might reveal top-secret information about the anti-terrorism program. As a result, Alsup sealed his ruling until April to give the government an opportunity to persuade a federal appeals court to keep the order from being released publicly.

But Alsup issued a separate three-page ruling outlining the results for Ibrahim, who has waged a high-profile legal battle since she learned she had been placed on the no-fly list as she tried to board a 2005 flight to Hawaii from San Francisco International Airport.

Ibrahim, Alsup wrote, is "entitled by due process to a ... remedy that requires the government to cleanse and/or correct its lists and records of the mistaken information."

Elizabeth Pipkin, Ibrahim's lawyer, said she hopes the ruling will permit the Malaysian national to again be able to travel to the United States.

"She's entitled to have her name cleared from the system," Pipkin said. "She shouldn't be ensnared in their system anymore."

The 48-year-old Ibrahim has been fighting the U.S. government from abroad, denied the right since 2005 to return to this country. Her case went to trial in December before Alsup, who heard the allegations of government wrongdoing without a jury, in part because of the government's assertion of national security privilege.

Ibrahim, an architecture scholar who wears a traditional Muslim hijab, was arrested at SFO in January 2005 as she headed to a conference in Hawaii with her teenage daughter. She learned she had been branded a terrorist suspect in government databases. Ibrahim denies any connection to terrorist organizations and settled a separate legal case against San Francisco police and others linked to her airport arrest for $225,000.

Before the incident, Ibrahim had been a regular traveler to the United States since the early 1980s, calling it her "second home." She met her husband here, marrying in Seattle in 1986, and her first child is a U.S. citizen.

After returning to Malaysia, she founded the architecture department at a major Malaysian university but returned to the states in 2000 to secure her doctorate from Stanford. As a result of later being put on the no-fly list, Ibrahim had to complete her Stanford doctorate remotely.

But she pressed her legal case for years, even as the government tried to sidetrack the legal claims. The 9th U.S. Circuit Court of Appeals twice allowed her case to proceed over the government's objections, leading to the recent trial.

Justice Department officials could not be reached for comment on Alsup's order.

The government places thousands of people on the lists each year, and similar lawsuits against the program and its methods have been unfolding in other courts around the country, including a major challenge in Oregon brought by the American Civil Liberties Union.

Howard Mintz covers legal affairs. Contact him at 408-286-0236 or follow him at Twitter.com/hmintz


Why Are There So Many Pythons? From Bytecode to JIT (with code) | Toptal


Comments:"Why Are There So Many Pythons? From Bytecode to JIT (with code) | Toptal"

URL:http://www.toptal.com/python/why-are-there-so-many-pythons


Python is amazing.

Surprisingly, that’s a fairly ambiguous statement. What do I mean by ‘Python’? Do I mean Python the abstract interface? Do I mean CPython, the common Python implementation (and not to be confused with the similarly named Cython)? Or do I mean something else entirely? Maybe I’m obliquely referring to Jython, or IronPython, or PyPy. Or maybe I’ve really gone off the deep end and I’m talking about RPython or RubyPython (which are very, very different things).

While the technologies mentioned above are commonly-named and commonly-referenced, some of them serve completely different purposes (or, at least, operate in completely different ways).

Throughout my time working with Python, I’ve run across tons of these .*ython tools. But not until recently did I take the time to understand what they are, how they work, and why they’re necessary (in their own ways).

In this post, I’ll start from scratch and move through the various Python implementations, concluding with a thorough introduction to PyPy, which I believe is the future of the language.

It all starts with an understanding of what ‘Python’ actually is.

If you have a good understanding of machine code, virtual machines, and the like, feel free to skip ahead.

“Is Python interpreted or compiled?”

This is a common point of confusion for Python beginners.

The first thing to realize is that ‘Python’ is an interface. There’s a specification of what Python should do and how it should behave (as with any interface). And there are multiple implementations (as with any interface).

The second thing to realize is that ‘interpreted’ and ‘compiled’ are properties of an implementation, not an interface.

So the question itself isn’t really well-formed.


That said, for the most common implementation (CPython: written in C, often referred to as simply ‘Python’, and surely what you’re using if you have no idea what I’m talking about), the answer is: interpreted, with some compilation. CPython compiles* Python source code to bytecode, and then interprets this bytecode, executing it as it goes.

* Note: this isn’t ‘compilation’ in the traditional sense of the word. Typically, we’d say that ‘compilation’ is taking a high-level language and converting it to machine code. But it is a ‘compilation’ of sorts.
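You can watch this compilation step happen with the dis module from the standard library, which disassembles the bytecode CPython generates for a function (the exact opcodes vary between CPython versions):

    import dis

    def add(a, b):
        return a + b

    # Prints the bytecode for add(); on most CPython versions this shows
    # LOAD_FAST a, LOAD_FAST b, a binary-add opcode, then RETURN_VALUE.
    dis.dis(add)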

Let’s look at that answer more closely, as it will help us understand some of the concepts that come up later in the post.

Bytecode vs. Machine Code

It’s very important to understand the difference between bytecode and machine (or native) code, perhaps best illustrated by example:

  • C compiles to machine code, which is then run directly on your processor. Each instruction instructs your CPU to move stuff around.
  • Java compiles to bytecode, which is then run on the Java Virtual Machine (JVM), an abstraction of a computer that executes programs. Each instruction is then handled by the JVM, which interacts with your computer.

In very brief terms: machine code is much faster, but bytecode is more portable and secure.

Machine code looks different depending on your machine, but bytecode looks the same on all machines. One might say that machine code is optimized to your setup.

Returning to CPython, the toolchain process is as follows:

CPython compiles your Python source code into bytecode. That bytecode is then executed on the CPython Virtual Machine.

Beginners often assume Python is compiled because of .pyc files. There's some truth to that: the .pyc file is the compiled bytecode, which is then interpreted. So if you've run your Python code before and have the .pyc file handy, it will run faster the second time, as it doesn't have to re-compile the bytecode.
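You can also trigger this caching step by hand. Here's a minimal sketch using the standard library's py_compile module (the file name is hypothetical):

    import py_compile

    # Compile hello.py to its cached bytecode; Python 3 writes it under
    # __pycache__/, while Python 2 writes hello.pyc next to the source.
    py_compile.compile("hello.py")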

Alternative VMs: Jython, IronPython, and More

As I mentioned earlier, Python has several implementations. The most common is CPython, a Python implementation written in C and considered the ‘default’ implementation.

But what about the alternatives? One of the more prominent is Jython, a Python implementation written in Java that utilizes the JVM. While CPython produces bytecode to run on the CPython VM, Jython produces Java bytecode to run on the JVM (this is the same stuff that’s produced when you compile a Java program).

“Why would you ever use an alternative implementation?”, you might ask. Well, for one, these different implementations play nicely with different technology stacks.

CPython makes it very easy to write C extensions for your Python code because in the end your Python code is executed by an interpreter written in C. Jython, on the other hand, makes it very easy to work with other Java programs: you can import any Java classes with no additional effort, summoning up and utilizing your Java classes from within your Jython programs. (Aside: if you haven’t thought about it closely, this is actually nuts. We’re at the point where you can mix and mash different languages and compile them all down to the same substance. (As mentioned by Rostin, programs that mix Fortran and C code have been around for a while. So, of course, this isn’t necessarily new. But it’s still cool.))

As an example, this is valid Jython code:

[Java HotSpot(TM) 64-Bit Server VM (Apple Inc.)] on java1.6.0_51
>>> from java.util import HashSet
>>> s = HashSet(5)
>>> s.add("Foo")
>>> s.add("Bar")
>>> s
[Foo, Bar]

IronPython is another popular Python implementation, written entirely in C# and targeting the .NET stack. In particular, it runs on what you might call the .NET Virtual Machine, Microsoft’s Common Language Runtime (CLR), comparable to the JVM.

You might say that Jython : Java :: IronPython : C#. They run on the same respective VMs, you can import C# classes from your IronPython code and Java classes from your Jython code, etc.

It’s totally possible to survive without ever touching a non-CPython Python implementation. But there are advantages to be had from switching, most of which are dependent on your technology stack. Using a lot of JVM-based languages? Jython might be for you. All about the .NET stack? Maybe you should try IronPython (and maybe you already have).

By the way: while this wouldn’t be a reason to use a different implementation, note that these implementations do actually differ in behavior beyond how they treat your Python source code. However, these differences are typically minor, and dissolve or emerge over time as these implementations are under active development. For example, IronPython uses Unicode strings by default; CPython, however, defaults to ASCII for versions 2.x (failing with a UnicodeEncodeError for non-ASCII characters), but does support Unicode strings by default for 3.x.
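To make that last difference concrete, here is a small sketch (assuming CPython; the snippet itself targets Python 3, with the Python 2 behavior shown in comments):

    # Python 2: plain literals are byte strings and the default codec is
    # ASCII, so encoding non-ASCII text fails:
    #   >>> u"café".encode("ascii")
    #   UnicodeEncodeError: 'ascii' codec can't encode character ...
    #
    # Python 3 (like IronPython): str is Unicode by default.
    s = "café"
    print(len(s))             # 4: counts characters, not raw bytes
    print(s.encode("utf-8"))  # b'caf\xc3\xa9': explicit conversion to bytes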

Just-in-Time Compilation: PyPy, and the Future

So we have a Python implementation written in C, one in Java, and one in C#. The next logical step: a Python implementation written in… Python. (The educated reader will note that this is slightly misleading.)

Here’s where things might get confusing. First, let’s discuss just-in-time (JIT) compilation.

JIT: The Why and How

Recall that native machine code is much faster than bytecode. Well, what if we could compile some of our bytecode and then run it as native code? We’d have to pay some price to compile the bytecode (i.e., time), but if the end result was faster, that’d be great! This is the motivation of JIT compilation, a hybrid technique that mixes the benefits of interpreters and compilers. In basic terms, JIT wants to utilize compilation to speed up an interpreted system.

For example, a common approach taken by JITs:

  1. Identify bytecode that is executed frequently.
  2. Compile it down to native machine code.
  3. Cache the result.
  4. Whenever the same bytecode is about to run, grab the pre-compiled machine code instead and reap the benefits (i.e., speed boosts).
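Here's a toy sketch of that bookkeeping in plain Python. Nothing below emits machine code; the "compiled" version is just a pre-supplied faster implementation standing in for real JIT output:

    HOT_THRESHOLD = 1000  # executions before a function counts as "hot"

    def maybe_jit(slow_func, fast_func):
        state = {"runs": 0, "compiled": None}
        def wrapper(*args):
            if state["compiled"] is not None:   # hot path: use cached version
                return state["compiled"](*args)
            state["runs"] += 1                  # light profiling: count runs
            if state["runs"] >= HOT_THRESHOLD:  # frequently executed: "compile"
                state["compiled"] = fast_func
            return slow_func(*args)
        return wrapper

    # square() is "interpreted" until it gets hot, then swapped out.
    square = maybe_jit(lambda x: x * x, lambda x: x * x)
    for i in range(2000):
        square(i)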

This is what PyPy is all about: bringing JIT to Python (see the Appendix for previous efforts). There are, of course, other goals: PyPy aims to be cross-platform, memory-light, and stackless-supportive. But JIT is really its selling point. Averaged over a suite of timed benchmarks, it’s said to improve performance by a factor of 6.27; the PyPy Speed Center publishes a full breakdown.

PyPy is Hard to Understand

PyPy has huge potential, and at this point it’s highly compatible with CPython (so it can run Flask, Django, etc.).

But there’s a lot of confusion around PyPy (see, for example, this nonsensical proposal to create a PyPyPy…). In my opinion, that’s primarily because PyPy is actually two things:

  1. A Python interpreter written in RPython (not Python (I lied before)). RPython is a subset of Python with static typing. In Python, it’s “mostly impossible” to reason rigorously about types. (Why is it so hard? Well, consider that x = random.choice([1, "foo"]) is valid Python code (credit to Ademan; a runnable version follows this list). What is the type of x? How can we reason about the types of variables when the types aren’t even strictly enforced?) With RPython, you sacrifice some flexibility, but in return it’s much, much easier to reason about memory management and the like, which allows for optimizations.
  2. A compiler that compiles RPython code for various targets and adds in JIT. The default platform is C, i.e., an RPython-to-C compiler, but you could also target the JVM and others.
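Here is Ademan's example as a runnable snippet; the only uncertainty is which type the random draw produces, which is exactly the point:

    import random

    x = random.choice([1, "foo"])  # x is an int or a str, decided at runtime
    print(type(x))                 # no static analysis can pin this down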

Solely for clarity, I’ll refer to these as PyPy (1) and PyPy (2).

Why would you need these two things, and why under the same roof? Think of it this way: PyPy (1) is an interpreter written in RPython. So it takes in the user’s Python code and compiles it down to bytecode. But the interpreter itself (written in RPython) must be interpreted by another Python implementation in order to run, right?

Well, we could just use CPython to run the interpreter. But that wouldn’t be very fast.

Instead, the idea is that we use PyPy (2) (referred to as the RPython Toolchain) to compile PyPy’s interpreter down to code for another platform (e.g., C, JVM, or CLI) to run on our machine, adding in JIT as well. It’s magical: PyPy dynamically adds JIT to an interpreter, generating its own compiler! (Again, this is nuts: we’re compiling an interpreter, adding in another separate, standalone compiler.)

In the end, the result is a standalone executable that interprets Python source code and exploits JIT optimizations. Which is just what we wanted! It’s a mouthful, but the pipeline is simple: the RPython source of the interpreter goes in one end, and a standalone, JIT-enabled interpreter comes out the other.

To reiterate, the real beauty of PyPy is that we could write ourselves a bunch of different Python interpreters in RPython without worrying about JIT (barring a few hints). PyPy would then implement JIT for us using the RPython Toolchain/PyPy (2).

In fact, if we get even more abstract, you could theoretically write an interpreter for any language, feed it to PyPy, and get a JIT for that language. This is because PyPy focuses on optimizing the actual interpreter, rather than the details of the language it’s interpreting.


As a brief digression, I’d like to mention that the JIT itself is absolutely fascinating. It uses a technique called tracing, which executes as follows:

  1. Run the interpreter and interpret everything (adding in no JIT).
  2. Do some light profiling of the interpreted code.
  3. Identify operations you’ve performed before.
  4. Compile these bits of code down to machine code.
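Here's a toy sketch of the tracing idea: run a tiny interpreter while recording the operations it executes, then compile the recorded trace into straight-line Python. (Real tracing JITs record at the bytecode level and emit machine code; this is only an illustration.)

    def interpret(program, acc, trace):
        for op, arg in program:
            trace.append((op, arg))             # record what actually ran
            acc = acc + arg if op == "add" else acc * arg
        return acc

    def compile_trace(trace):
        lines = ["def compiled(acc):"]          # build straight-line source
        for op, arg in trace:
            lines.append("    acc = acc %s %r" % ("+" if op == "add" else "*", arg))
        lines.append("    return acc")
        namespace = {}
        exec("\n".join(lines), namespace)       # compile the trace once
        return namespace["compiled"]

    trace = []
    print(interpret([("add", 2), ("mul", 3)], 1, trace))  # 9, interpreted
    fast = compile_trace(trace)
    print(fast(1))                              # 9, replayed without dispatch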

For more, this paper is highly accessible and very interesting.

To wrap up: we use PyPy’s RPython-to-C (or other target platform) compiler to compile PyPy’s RPython-implemented interpreter.

Wrapping Up

Why is this so great? Why is this crazy idea worth pursuing? I think Alex Gaynor put it well on his blog: “[PyPy is the future] because [it] offers better speed, more flexibility, and is a better platform for Python’s growth.”

In short:

  • It’s fast because it compiles source code to native code (using JIT).
  • It’s flexible because it adds the JIT to your interpreter with very little additional work.
  • It’s flexible (again) because you can write your interpreters in RPython, which is easier to extend than, say, C (in fact, it’s so easy that there’s a tutorial for writing your own interpreters).

Appendix: Other Names You May Have Heard

  • Python 3000 (Py3k): an alternative naming for Python 3.0, a major, backwards-incompatible Python release that hit the stage in 2008. The Py3k team predicted that it would take about five years for this new version to be fully adopted. And while most (warning: anecdotal claim) Python developers continue to use Python 2.x, people are increasingly conscious of Py3k.

  • Cython: a superset of Python that includes bindings to call C functions.
    • Goal: allow you to write C extensions for your Python code.
    • Also lets you add static typing to your existing Python code, allowing it to be compiled and reach C-like performance.
    • This is similar to PyPy, but not the same. In this case, you’re enforcing typing in the user’s code before passing it to a compiler. With PyPy, you write plain old Python, and the compiler handles any optimizations.
  • Numba: a “just-in-time specializing compiler” that adds JIT to annotated Python code. In the most basic terms, you give it some hints, and it speeds up portions of your code. Numba comes as part of the Anaconda distribution, a set of packages for data analysis and management.
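A hedged sketch of what that looks like (assuming the numba package is installed; the function itself is a made-up example):

    from numba import jit

    @jit(nopython=True)     # ask Numba to compile this down to native code
    def sum_of_squares(n):
        total = 0
        for i in range(n):
            total += i * i
        return total

    # The first call triggers compilation; subsequent calls run native code.
    print(sum_of_squares(10000000))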

  • IPython: very different from anything else discussed. An interactive computing environment for Python, with support for GUI toolkits and a browser-based notebook interface, etc.

  • Psyco: a Python extension module, and one of the early Python JIT efforts. However, it’s since been marked as “unmaintained and dead”. In fact, the lead developer of Psyco, Armin Rigo, now works on PyPy.

Language Bindings

  • RubyPython: a bridge between the Ruby and Python VMs. Allows you to embed Python code into your Ruby code. You define where the Python starts and stops, and RubyPython marshals the data between the VMs.

  • PyObjc: language-bindings between Python and Objective-C, acting as a bridge between them. Practically, that means you can utilize Objective-C libraries (including everything you need to create OS X applications) from your Python code, and Python modules from your Objective-C code. In this case, it’s convenient that CPython is written in C, which is a subset of Objective-C.

  • PyQt: while PyObjc gives you bindings for the OS X GUI components, PyQt does the same for the Qt application framework, letting you create rich graphic interfaces, access SQL databases, etc. Another tool aimed at bringing Python’s simplicity to other frameworks.

JavaScript Frameworks

  • pyjs (Pyjamas): a framework for creating web and desktop applications in Python. Includes a Python-to-JavaScript compiler, a widget set, and some more tools.

  • Brython: a Python VM written in JavaScript to allow for Py3k code to be executed in the browser.


The Periodic Table of Rust Types


Comments:"The Periodic Table of Rust Types"

URL:http://cosmic.mearie.org/2014/01/periodic-table-of-rust-types/


Raw pointers (raw pointers do not have ownership):
  *T               immutable raw pointer
  *mut T           mutable raw pointer

Simple types:
  &'r T            immutable borrowed pointer
  &'r mut T        mutable borrowed pointer
  ~T               owned pointer
  T                bare primitive type, struct, enum, and so on

Vectors:
  &'r [T]          immutable borrowed vector slice
  &'r mut [T]      mutable borrowed vector slice
  ~[T]             owned vector
  [T, ..n]         fixed-size vector
  [T]              unsized vector type

Strings (strings have to be allocated in some other storage):
  &'r str          immutable borrowed string slice
  &'r mut str      mutable borrowed string slice
  ~str             owned string
  str              unsized string type

Traits (traits have to be resolved either at compile time or at runtime):
  &'r Trait:K      immutable borrowed trait object
  &'r mut Trait:K  mutable borrowed trait object
  ~Trait:K         owned trait object
  Trait            unsized trait type

Call-many closures:
  &'r Fn<T…, U>:K           closure with immutable environment (no shorthand)
  &'r mut Fn<T…, U>:K       closure with mutable environment (shorthand: 'r |T…|:K -> U)
  ~Fn<T…, U>:K              closure with owned environment (no shorthand)
  extern "ABI" fn(T…) -> U  bare function type

Call-once procedures (bare functions can always be called multiple times):
  &'r OnceFn<T…, U>:K       procedure with immutable environment (no shorthand)
  &'r mut OnceFn<T…, U>:K   procedure with mutable environment (no shorthand)
  ~OnceFn<T…, U>:K          procedure with owned environment (shorthand: proc(T…) -> U)

Legend: 'r is a lifetime; :K is trait bounds; T… is function arguments; -> U is the function return; extern "ABI" is an ABI definition.

What is this?

This "periodic table" is a cheatsheet for various Rust types. Rust programming language has a versatile type system that avoids a certain class of memory error in the safe context, and consequently has somewhat many types at the first glance. This table organizes them into an orthogonal tabular form, making them easier to understand and reason. I also hope that this makes very obvious why Rust needs such seemingly many types.

The periodic table was made by Kang Seonghoon as a thought experiment, and then... it unexpectedly got redditted :p Since then I've made the URL permanent and added some descriptions. Update (2014-01-15 08:30 UTC): I've updated the table slightly to include the missing fixed-size vector [T, ..n], so it now has a new "unsized" column.

Discussion: /r/rust, /r/programming, Hacker News

Guide

Columns indicate ownership. There are two big groups from left to right: indirect (i.e., referenced or owned) and direct. In particular, indirect types can be coerced horizontally: ~T can be coerced to &mut T or &T, and &mut T can be coerced to &T.

Rows indicate the different kinds of types. There are three big groups from top to bottom: unsafe dereference, safe dereference, and callable.

Colored backgrounds (sorry about the accessibility!) indicate current availability. A black background means the type is plainly absurd and prohibited. A gray background means the type makes some sense but is not yet in the language. Fortunately we have two proposals that cover all the missing types:

  • Dynamically sized types proposal (a.k.a. DST) brings unsized types to the language. Normally most types, including references and owned pointers, have a known size and thus are "sized", but [T], str and Trait do not have a known size at compile time. The proposal allows them in limited contexts, primarily to fully support custom smart pointers. This also has the side effect of allowing &mut str, though it won't see much use since safe strings cannot be modified via indexing.
  • Variadic generics proposal brings a variadic number of generic parameters, similar to C++11. Consequently it can turn closures and procedures into simple traits. The exact interface is not yet settled (one of the possible interfaces discussed is indicated in the table in small print), so it may look different in the future.

There are some optional syntactic parts possible in types. Many of them are turned off by default since they are normally verbose, but you can turn them on if you want.

Copyright © 2014, Kang Seonghoon. This work is licensed under a Creative Commons Attribution 4.0 International License.

a part of cosmic.mearie.org.


Rails Consulting for Fun and Profit - Hi, I'm Josh Symonds


Comments:"Rails Consulting for Fun and Profit - Hi, I'm Josh Symonds"

URL:http://joshsymonds.com/blog/2014/01/14/rails-consulting-for-fun-and-profit/


2013 was a great year for me specifically, and for my web development shop (Symonds & Son) in general. Though I initially fell into consulting accidentally, I’ve aggressively parlayed it into a successful business, and my only regret is not doing so sooner! A lot of developers I know are on the fence about striking out on their own. I’m going to lay out how 2013 took me from full-time employee to owner of my own business, and in doing so hopefully persuade a few people that the benefits of being in business for yourself far outweigh the risks.

Becoming a Mercenary

At the start of 2013, I was a salaried employee working at a startup. The rate was fine, but I spent a lot of time working — long hours in the evening to make aggressive sprints, and many meetings during the day to discuss development priorities and investor relations. I wasn’t happy, but it was a job, and I was satisfied with it.

A few months into my tenure, I was offered a salary adjustment to help make my company’s bottom-line more attractive. Instead of taking it, I proposed an alternative arrangement: I would become a consultant at a rate very similar to my old hourly and I’d work part-time. After some negotiation, a deal was struck, and I was officially a free agent.

But I wasn’t happy — indeed, quite the opposite. I spent weeks beforehand freaking out. I had the extra hours available to make up my cut income, but I’d need clients to actually pay me for that time. Otherwise I’d need to find another salaried position, and fast, since I didn’t have much in the way of savings. Disaster was looming, and I spent sleepless nights trying to figure out how I’d find a client, how I’d convince them to actually pay me, and, if that failed, how I’d explain a gap on my resume to future potential employers.

Yet almost immediately upon taking the plunge, an old client wanted me to do new work for them. They recommended me to another company, who told one of their clients about me, and very shortly after I became part-time, I was full-time working for my clients. It took me almost all of 2013 to understand what had happened.

Charge Them and They’ll Thank You

In case you didn’t know, Rails developers — specifically, good Rails developers with experience in modern tools, an interest in improving themselves, and an aggressive talent for development — are extremely hard to find. In fact, finding developers who are capable of programming in their chosen language at all is a challenge.

But why accept a generalized statement when I can give you a specific example? I’m presently working with a client to vet Rails engineers. The rate they’re willing to pay is really quite good, yet the candidates their recruiter finds are just terrible. This might be a topic for a separate post, but of the dozen people I’ve interviewed:

  • One didn’t know what Rubygems were,
  • One didn’t know what ActiveRecord was,
  • One had no idea how to sort an array in Ruby (and was surprised when I pointed out Array#sort),
  • and one knew all this but had the interpersonal skills of a serial killer. The creepy kind, not the mesmerizing kind.

Yet the recruiter says these people are snatched up all the time, at a rate of roughly $100 an hour. I honestly have absolutely no idea how this happens. I’m not exaggerating even a little bit when I say I believe that none of these people can code at all — they have no GitHub profiles, no code samples, absolutely nothing to their names.

(Incidentally, can you do better than these idiot candidates? Let me know, I have a pile of money with your name on it.)

I’ve come to believe that these people are why I’ve succeeded. If you bring dedication, honesty, and actual, real skill to your clients, they will recognize your contributions, keep coming back to you, and tell all their friends about you. And they’re willing to pay your hourly rate for long nights, excessive meetings, and even just listening to their plans and helping them improve their processes.

But if that’s so, then why do most engineers, even the good ones, stay put at their full-time, salaried positions?

Success, Outside the Bubble

I think a lot of it has to do with the startup culture in San Francisco presently. Weirdly, I think it has the effect of keeping engineer salaries artificially lowered.

Our industry is dominated by talk of the tech bubble: all the press is about acquisitions, huge seed funding rounds, and successful entrepreneurs’ new projects. But the amount of money in Silicon Valley is really quite limited. There are enormous industries out there that need skilled programmers but lack the sex appeal of a startup or coverage in TechCrunch — yet they have applications in Rails and backend infrastructure needs too.

And they also have way, way more money. Most of my clients are not extremely large businesses in their fields, but the amount of capital they have dwarfs that of even established startups. And they actually have business models that have worked for them for many years, so I worry less about revenue stream issues (or, heaven forbid, them folding overnight).

Even better, to these companies, you aren’t just an engineer with a set salary: you’re solving a business problem with software. Your value to them is measured in the millions of dollars you saved their company, not the amount they’re expecting to pay to a Rails engineer. And by charging on the former, not the latter, you can turn a very tidy profit indeed.

When You Try to Fly, Sometimes You Fall

Of course, the process of getting my business up and running hasn’t been all sunshine and roses. I’ve made some mistakes and wished I’d handled a few things differently.

  • I nickel-and-dimed a client on change requests, alienating that client and making myself appear less professional. Said client did not have a whole lot of money, and while the initial contract amount was commensurately very low, she really didn’t appreciate me charging extra for some very minor changes. I should have just sucked it up and done the work, leaving both of us with warm fuzzies in the end, even if I took a slight loss on the contract. Most of my clients hear about me from other satisfied clients, and I would have been better served by her loving me than by making a little more money.

  • For pricing my services, I need to start high and work my way down. I generally start client conversations on my hourly rate at what I would consider a reasonable ultimate number, and then allow myself to be driven down from there, generally because the client wants a long-term contract and expects to save on my hourly rate based on the length of the engagement.

  • More projects, less hourly. When starting as a consultant, I was really selling only my hours. Now Symonds & Son is a business in its own right, and I’ve hired designers and developers to help with my workload. Working with other talented individuals makes much more sense on a project basis, where I can package their (and my) hours together.

I’m Sold: How Do I Do This?

“Shut up already Josh, I think this is a great idea and want to become a consultant too! What’s next?”

Make sure you have a good track record and established public credentials. Verify your friends (and your ex-employers) will vouch for the quality of your code and the quality of you as an individual. Go to meetups, write blog posts, have open-sourced code on GitHub — the more stuff you have on record, the better. Your clients will want to know everything about you they can before they even meet you.

Have a backup plan in case everything goes wrong. Mine was “find another full-time Rails job,” something that I’ve never traditionally had a problem finding. At least think about a safety net so that taking the plunge is less scary.

And consider if you really want to be a mercenary. Many talented coders I know work for peanuts, but they do so for non-profits, amazing startups, and benefit corporations. They don’t care that they’re not taking home tons of money; they are making a difference in the world, which matters more to them than any paycheck ever could.

But if the idea of consulting appeals to you, then I encourage you to take the plunge. As engineers, our services are as in-demand as ever; if you are a competent engineer, you can turn your skills from a salary into a solution, and companies pay much more handsomely for the latter. And if you try to fly and fail, you have a good chance of landing at another job anyway — so really, the risk is pretty minimal. And if you take off and soar away, let me know: I always need exciting new companies to work on big contracts with!


veltman/principles · GitHub


Comments:"veltman/principles · GitHub"

URL:https://github.com/veltman/principles


This is a list-in-progress of things I try to keep in mind when working on web projects. Some are matters of logic, others are matters of personal taste.

I manage to abide by some of these things some of the time.

  • The best way to make something durable and flexible is not to get too fancy in the first place. You only have to fix the web to the extent that you break it first.

  • Make sure states of your app that you want to be shareable have unique URLs. Use hashes if necessary. Assume that someone will copy the URL directly from the address bar, not from your "Share this!" widget.

  • You need a very good reason to have more than two fonts per page.

  • Don't use tiny font sizes, and be generous with line spacing for body text.

  • Don't make tiny click targets. Assume that someone will be mashing that "X" icon with a fingertip, not a cursor.

  • Redundancy is a useful design technique. Labels+icons, color+width, etc.

  • Embrace vertical scrolling, don't break it. Scroll-based animation is annoying. Stop turning the web into a popup book.

  • Don't rely heavily on hovers to make something interesting. Even if most of them have a mouse, requiring your users to go on a scavenger hunt is obnoxious.

  • If an input is going to be numeric, use the "number" input type so mobile devices can show the appropriate keyboard.

  • Use loading indicators for XHR requests, even if they're likely to be very fast. You never know how slow or broken it might be for a user. They should know if something is missing.

  • In mobile browsers, the scrollbar is often subtle or invisible. Check where your page gets cut off at those sizes and make sure it's apparent that there's more below the fold.

  • Don't break browser zoom (things like full-window maps are an exception).

  • Try not to rely heavily on audio (this includes video with voiceover). Lots of users will be in public, and even if they have headphones, requiring them to get them out and/or put them on is a big barrier.

  • Text should be text, not text in an image.

  • Make it clear that clickable things are clickable. They should have cursor: pointer, have hover states, and if they're text, they should be distinct from other text.

  • Avoid lightbox modals if possible.

  • Don't make your location-based app require a user's location. Be prepared for them to say no. More generally, if you have a very personalized app, think about what interesting things you can show someone who doesn't want to get personal.

  • Don't give someone 20 equally interesting things to do right off the bat. Give them a more focused presentation upfront before turning them loose.

  • Small multiples reflow easily.

  • Web design is not a contest to see who can have the fewest empty pixels. Whitespace is a valuable asset for focusing a user's attention. Let things breathe.

  • Limit the maximum width of text blocks to something like 900px. Anything wider becomes hard to scan.

  • Don't use Flash.

  • Assume that people won't read the instructions.

  • Put legends as close as possible to the chart content they describe.

  • Label your axes and show your units.

  • Get live data into your visualization early. If you can't, use historical data or something else a little bit representative. Visualizing random test data will lead you astray.

  • When showing the change in a value over time, you need a good reason not to use an area chart or line chart.

  • When comparing a value for multiple categories, you need a good reason not to use a bar or column chart.

  • When comparing two variables across a set of data, you need a good reason not to use a scatterplot.

  • When showing a distribution, you need a good reason not to use a histogram/distribution curve.

  • Don't make 3D charts.

  • Don't muck around with the y-axis. Be mindful of scale.

  • Don't expect people to discern subtle differences in scale for bubble size or opacity.

  • Don't use more than three or four colors in a categorical scheme. If you have more categories than that, you probably need to use something besides color to differentiate.

  • Be mindful of color blindness when picking combinations. Use something like Colorbrewer to pick your scales.

  • Don't make slideshows/lists without a "view all" option. Better yet, make it "view all" from the start with vertical scrolling instead of requiring a dozen clicks.

  • Transitions are fun the first time and then usually annoying. Try not to use them unless you're actually trying to show persistent objects in transition.

  • Use Leaflet for maps.

  • Don't make population maps.

  • For world maps, consider using a Robinson projection. For US maps, consider using an Albers projection. Whatever the map, be aware of the distortions of your chosen projection.

  • Specify a charset, presumably utf-8.

  • Specify a doctype, presumably <!DOCTYPE html> (I don't know of a good reason to use any other type).

  • Include descriptive social media <meta> tags.

  • Put site scripts at the end of the <body> tag.

  • If something is going to be rendered the same way every time with JavaScript, it should become static.

  • Concatenate and minify scripts to minimize page size and number of requests.

  • Use descriptive subfolders for resources (css/, js/, images/).

  • Include version numbers in the filenames of JS libraries.

  • Cache jQuery and D3 selectors that are going to be reused, instead of re-selecting the same elements every time you need them.

  • Clean and transform your raw data stepwise. Make it a repeatable process. Use Makefiles or shell scripts if you can.

  • Give data files descriptive names. geocoded-20131206.tsv, not data.tsv.

  • Link to your data sources in the final presentation. Explain your methodology, and especially explain the limits and shortcomings of your analysis.

  • Don't use really complex regular expressions if you can help it. Use multiple steps instead, you'll be less likely to screw up.

  • Round coordinates (quantize) and simplify geodata files to save space as appropriate. You don't need data to the inch to make a world map.

  • Store data as JSON or TSV.

  • Work with simple text files when possible. Avoid the overhead of a database unless your data really demands it.

  • Make sure you know the way(s) that a dataset represents unavailable values. It could be a blank space, it could be a dash, it could be an asterisk. It could be multiple things. Some places will use 999 or 99.999 to mean a number that's not available.
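For instance, a hedged sketch using pandas (the file name and the marker list are assumptions; adapt them to your dataset):

    import pandas as pd

    # Normalize every known "unavailable" marker to NaN while loading.
    df = pd.read_csv(
        "geocoded-20131206.tsv",
        sep="\t",
        na_values=["", "-", "*", "999", "99.999"],
    )
    print(df.isnull().sum())  # missing values per column, counted uniformly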

  • Cache pages while you scrape them so that you don't have to start over if you were scraping the wrong thing.
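A minimal sketch of that caching pattern, using only the Python 3 standard library (the cache directory name is an assumption):

    import hashlib
    import os
    import urllib.request

    CACHE_DIR = "scrape-cache"
    os.makedirs(CACHE_DIR, exist_ok=True)

    def fetch(url):
        # Hash the URL so any URL maps to a safe, unique filename.
        name = hashlib.sha1(url.encode("utf-8")).hexdigest()
        path = os.path.join(CACHE_DIR, name)
        if os.path.exists(path):
            with open(path, "rb") as f:    # already fetched: reuse raw page
                return f.read()
        data = urllib.request.urlopen(url).read()
        with open(path, "wb") as f:        # first fetch: keep for re-parsing
            f.write(data)
        return data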

  • Beware the four C's of working with text data: character encoding, capitalization, curly quotes, and cwhitespace.

  • Make things static whenever possible. If they can't be static files, cache routes with something like Varnish.

  • Cool URLs don't change.

  • Make URLs short, descriptive, and lowercase.

  • Use http://domain.com/, not www.domain.com - but configure www. to redirect to the former.

  • Flush POST data so it can't be resubmitted with a refresh.

  • Try very hard not to rely directly on external APIs. Use an in-between layer so that if the API goes down, your site is just stale instead of broken.

  • Assume your page will be one of a user's dozen open tabs. Use short, descriptive page titles and a favicon.

  • Minify images as much as possible, and use the appropriate format. You don't need rich, lossless, giant files for tiny icons or thumbnails.

  • Don't store credentials in script files. Use environment variables.
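A minimal sketch in Python (the variable name is hypothetical):

    import os

    # Read the secret from the environment; fail loudly if it isn't set.
    API_KEY = os.environ["EXAMPLE_API_KEY"]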

  • Have a recovery plan, and test it.

  • Keep detailed logs, and rotate the logfiles.

  • Keep track of your server's dependencies and jobs such that you can quickly rebuild it from scratch. Have a clean AMI or equivalent.

  • If you use AWS, have at least one non-AWS fallback.

  • Use a staging server.

  • Don't rely on browser sniffing. Detect support for relevant features instead. And if you are doing too much of that, you probably got too fancy (see bullet #1).

  • If you're using a framework, make sure verbose error messages are off in production.

  • Be open about uncertainty around a project's eventual form. Build with escape routes and next-best options in mind in case you hit a dead end or the data doesn't cooperate.

  • Form a team around a project at the beginning. Everyone who's going to be building something should be involved in the formative discussions. Have them physically sit together as a team if possible.

  • Build in sufficient testing time. Guerrilla test early with people who aren't involved in the project, and don't give your testers a bunch of prefatory instructions before turning them loose (your real users won't get any).

  • Don't spend too much time designing static mockups of unstatic things. Go from paper to code as quickly as possible. Graphics software is for graphics.

  • Deciding how much to care about different types of users requires knowing your users in the first place. Understand who they are, what they want, what browsers they use, what devices they use, etc. Make that the starting point when deciding what level of resources to devote to addressing different scenarios.

  • Set time aside to write good documentation. Be specific about dependencies and installation. Use real-world scenarios and variable names in your examples, no foo and bar gibberish.


    legal - Why are you now required NOT to smile in passport photos? - Travel Stack Exchange


    Comments:"legal - Why are you now required NOT to smile in passport photos? - Travel Stack Exchange"

    URL:http://travel.stackexchange.com/q/11534/101


    The contents of the main page in a passport are dictated by standards set by the International Civil Aviation Organization (ICAO), specifically the Machine Readable Travel Documents standard (Doc 9303).

    This document states that all passport photos should meet the following requirements:

    1. Pose
    1.1. The photograph should be less than six months old.
    1.2. It should show a close up of the head and shoulders.
    1.3. The photograph should be taken so that an imaginary horizontal line between the centres of the eyes is parallel to the top edge of the picture.
    1.4. The face should be in sharp focus and clear with no blemishes such as ink marks, pen, pin, paper clip, staples, folds, dents, or creases.
    1.5. The photograph should show the subject facing square on and looking directly at the camera with a neutral expression and the mouth closed.
    1.6. The chin to crown (crown is the position of the top of the head if there were no hair) shall be 70-80% of the vertical height of the picture.
    1.7. The eyes must be open and there must be no hair obscuring them.
    1.8. If the subject wears glasses, the photograph must show the eyes clearly with no lights reflected in the glasses. The glasses shall not have tinted lenses. Avoid heavy frames if possible and ensure that the frames do not cover any part of the eyes. Sunglasses cannot be worn or appear on the person’s head.
    1.9. Coverings, hair, headdress, hats, scarfs, head bands, bandanas or facial ornamentation which obscure the face, are not permitted (except for religious or medical reasons. In all cases, the person’s full facial features from bottom of chin to top of forehead and both edges of the face must be clearly visible).
    1.10. The photograph must have a plain light coloured background.
    1.11. There must be no other people, chair back, or objects in the photograph.
    2. Lighting, Exposure, and Colour Balance
    2.1 The lighting must be uniform with no shadows or reflections on the face, eye-glasses or in the background.
    2.2 The subject’s eyes must not show red eye.
    2.3 The photograph must have appropriate brightness and contrast.
    2.4 Where the picture is in colour, the lighting, and photographic process must be colour balanced to render skin tones faithfully.
    3. Submission of Portrait to the Issuing Authority
    Where the portrait is supplied to the Issuing Authority in the form of a print, the photograph, whether produced using conventional photographic or digital techniques, should be on good or photo-quality paper.
    4. Compliance with International Standards
    4.1 The photograph shall comply with the appropriate definitions set out in ISO/IEC 19794-5.
    

    The last of the points above is probably the most important as far as "why": ISO/IEC 19794 defines standards for Biometric data interchange formats, with Part 5 specifically being Face image data. According to the ISO/IEC 19794 documents:

    To enable many applications on variety of devices, including devices that have the limited resources required for data storage, and to improve face recognition accuracy, this part of ISO/IEC 19794 specifies not only a data format, but also scene constraints (lighting, pose, expression etc), photographic properties (positioning, camera focus etc), digital image attributes (image resolution, image size etc).


    AnandTech Portal | AMD Kaveri Review: A8-7600 and A10-7850K Tested


    Comments:"AnandTech Portal | AMD Kaveri Review: A8-7600 and A10-7850K Tested"

    URL:http://www.anandtech.com/show/7677/amd-kaveri-review-a8-7600-a10-7850k


    The first major component launch of 2014 falls at the feet of AMD and the next iteration of its APU platform, Kaveri. Kaveri has been AMD's aim for several years; it's actually the whole reason the company bought ATI back in 2006. As a result, many different prongs of AMD’s platform come together here: HSA, hUMA, offloading compute, unifying GPU architectures, developing a software ecosystem around HSA, and a scalable architecture. This is, on paper at least, a strong indicator of where the PC processor market is heading in the mainstream segment. For today's Kaveri review, AMD sampled us the 45/65W (cTDP) A8-7600 and the 95W A10-7850K. The A10-7850K is available today, while the A8 part will be available later in Q1.

    The Kaveri Overview

    To almost all users, including myself up until a few days ago, Kaveri looks like just another iteration of AMD’s APU lineup: one that focuses purely on integrated graphics while slowly clawing CPU performance back to Thuban levels. But Kaveri is aiming much higher than that.

    Due to the way AMD updates its CPU line, using the ‘tick-tock’ analogy might not be appropriate. Kaveri is AMD’s 3rd generation Bulldozer architecture on a half-node process shrink. Kaveri moves from Global Foundries' 32nm High-K Metal Gate SOI process to its bulk 28nm SHP (Super High Performance) process. The process node shift actually explains a lot about Kaveri's targeting. While the 32nm SOI process was optimized for CPU designs at high frequency, GF's bulk 28nm SHP process is more optimized for density with a frequency tradeoff. AMD refers to this as an "APU optimized" process, somewhere in between what a CPU and what a GPU needs. The result is Kaveri is really built to run at lower frequencies than Trinity/Richland, but is far more dense.

    Kaveri is the launch vehicle for AMD's Steamroller CPU architecture, the 3rd iteration of the Bulldozer family (and second to last before moving away from the architectural detour). While Piledriver (Trinity/Richland) brought Bulldozer's power consumption down to more rational levels, Steamroller increases IPC. AMD uses Steamroller's IPC increase to offset the frequency penalty of moving to 28nm SHP. AMD then uses the density advantage to outfit the design with a substantially more complex GPU. In many senses, Kaveri is the embodiment of what AMD has been preaching all along: bringing balance to the CPU/GPU split inside mainstream PCs. The strategy makes a lot of sense if you care about significant generational performance scaling, it's just unfortunate that AMD has to do it with a CPU architecture that puts it at a competitive deficit.

    The die of Kaveri is of similar size to Richland (245mm² vs. 236mm²) but has 85% more transistors (2.41B vs. 1.3B). Unfortunately AMD hasn't confirmed whether we are talking about layout or schematic transistors here, or even if both figures are counted the same way, but there's clearly some increase in density. Typically a move from 32nm to 28nm should give a 26% boost for the same area, not an 85% boost.

    The GPU side of the equation is moving from a Cayman derived GPU in Richland to a Hawaii / GCN based one in Kaveri with the addition of HSA support. This vertically integrates the GPU stack to GCN, allowing any improvements in software tool production to affect both.

    For the first time since AMD went on its march down APU lane, the go-to-market messaging with Kaveri is heavily weighted towards gaming. With Llano and Trinity, AMD would try to mask CPU performance deficiencies by blaming benchmarks or claiming that heterogeneous computing was just around the corner. While it still believes in the latter, AMD's Kaveri presentations didn't attempt to force the issue and instead focused heavily on gaming as the killer app for its latest APU. HSA and heterogeneous computing are still important, but today AMD hopes to sell Kaveri largely on its ability to deliver 1080p gaming in modern titles at 30 fps. Our testing looks favourably on this claim, with some titles getting big boosts over similarly powered Richland counterparts, although the devil is in the details.

    The feature set from Richland to Kaveri gets an update all around as well, with a fixed-function TrueAudio DSP on the processor to offload complex audio tasks. AMD claims that adding reverb to one audio sample for 3+ seconds can take more than 10% of one CPU core, so the TrueAudio system lets game developers enhance full surround audio with effects, yielding more accurate spatialization when upscaling to 7.1 or downscaling from 5.1 to stereo. TrueAudio support unfortunately remains unused at launch, but Kaveri owners will be able to leverage the technology whenever games ship with it. Alongside TrueAudio, both the Unified Video Decoder (UVD) and the Video Coding Engine (VCE) are upgraded.

    One of the prominent features of Kaveri we will be looking into is its HSA (Heterogenous System Architecture) – the tight coupling of CPU and GPU, extending all the way down to the programming model. Gone are the days when CPU and GPU cores have to be treated like independent inequals, with tons of data copies back and forth for both types of cores to cooperate on the same problem. With Kaveri, both CPU and GPU are treated as equal class citizens, capable of working on the same data in the same place in memory. It'll be a while before we see software take advantage of Kaveri's architecture, and it's frustrating that the first HSA APU couldn't have come with a different CPU, but make no mistake: this is a very big deal. The big push on AMD’s side is the development of tools for the major languages (OpenCL, Java, C++ and others) as well as libraries for APIs to do this automatically and with fewer lines of code.

    Kaveri will support OpenCL 2.0, which should make it the first CPU/APU/SoC to carry that certification.

    The Kaveri Lineup: Desktop Sweet Spot at 45W

    For years now Intel has been targeting mobile first with its CPU architectures. More recently NVIDIA started doing the same with its GPUs (well, ultra-mobile first). With Haswell, Intel's architecture target shifted from 35 - 45W down to 10 - 20W, effectively making Ultrabook form factors the target for its CPU designs. Intel would then use voltage scaling to move the architecture up/down the stack, with Atom and Quark being used to go down to really low TDPs.

    For AMD, Kaveri truly embraces the mobile first approach to design with a platform target of 35W. AMD is aiming higher up the stack than Intel did with Haswell, but it also has a lower end CPU architecture (Jaguar) that shoots a bit above Atom. I suspect eventually AMD will set its big architecture sights below 35W, but for now AMD plays the hand it was dealt. The Kaveri project was started 4 years ago and the Haswell platform retargeting was a mid-design shift (largely encouraged by Apple as far as I can tell), so it's not surprising to see Kaveri end up where it does. It's also worth pointing out that the notebook designs AMD primarily competes in are larger 35W machines anyways.

    AMD's mobile roadmap states that we'll see Kaveri go all the way down to 15W (presumably in a 2-core/1-module configuration):

    Kaveri mobile however appears to be a mid 2014 affair; what launches today are exclusively desktop parts. With an aggressive focus on power consumption, AMD's messaging around Kaveri is simply more performance at the same power.

    Here are the Bulldozer based processors for each of AMD’s main desktop target segments: 45W, 65W and 95-100W:

    AMD 45W Bulldozer Based APUs

    | | A8-6500T (Richland) | A8-6700T (Richland) | A8-7600 (Kaveri) |
    | Microarch | Piledriver | Piledriver | Steamroller |
    | Socket | FM2 | FM2 | FM2+ |
    | Modules/Cores | 2/4 | 2/4 | 2/4 |
    | CPU Base Freq (MHz) | 2100 | 2500 | 3100 |
    | Max Turbo (MHz) | 3100 | 3500 | 3300 |
    | TDP | 45W | 45W | 45W |
    | L1 Cache | 128KB I$, 64KB D$ | 128KB I$, 64KB D$ | 192KB I$, 64KB D$ |
    | L2 Cache | 2x2MB | 2x2MB | 2x2MB |
    | Graphics | HD 8550D | HD 8650D | R7 |
    | GPU Cores | 256 | 284 | 384 |
    | GPU Clock (MHz) | 720 | 720 | 720 |
    | Max DDR3 | 1866 | 1866 | 2133 |
    | Current Price | N/A | N/A | $119 |

    (The Trinity column in AMD's original table is empty; no 45W desktop Trinity part was released.)

    Actually, the 45W segment is almost a cop out here. AMD never released a 45W desktop edition of Trinity, and while it formally released a couple of 45W Richland APUs back in August, I literally have not seen them for sale in the regular markets (US, UK) that I check. After my initial Kaveri pre-launch information article, one reader got in touch and confirmed that a mid-sized Italian etailer was selling them and had some in stock, but the majority of the world can't seem to get a hold of them. For the purpose of this review, AMD was kind enough to source retail versions of both the A8-6500T and A8-6700T for comparison points to show how much the system has improved at that power bracket.

    AMD 65W Bulldozer Based APUs

    | | A6-5400K (Trinity) | A8-5500 (Trinity) | A10-5700 (Trinity) | A8-6500 (Richland) | A10-6700 (Richland) | A8-7600 (Kaveri) |
    | Microarch | Piledriver | Piledriver | Piledriver | Piledriver | Piledriver | Steamroller |
    | Socket | FM2 | FM2 | FM2 | FM2 | FM2 | FM2+ |
    | Modules/Cores | 1/2 | 2/4 | 2/4 | 2/4 | 2/4 | 2/4 |
    | CPU Base Freq (MHz) | 3600 | 3200 | 3400 | 3500 | 3700 | 3300 |
    | Max Turbo (MHz) | 3800 | 3700 | 4000 | 4100 | 4300 | 3800 |
    | TDP | 65W | 65W | 65W | 65W | 65W | 65W |
    | L1 Cache | 64KB I$, 32KB D$ | 128KB I$, 64KB D$ | 128KB I$, 64KB D$ | 128KB I$, 64KB D$ | 128KB I$, 64KB D$ | 192KB I$, 64KB D$ |
    | L2 Cache | 1MB | 2x2MB | 2x2MB | 2x2MB | 2x2MB | 2x2MB |
    | Graphics | HD 7540D | HD 7560D | HD 7660D | HD 8570D | HD 8670D | R7 |
    | GPU Cores | 192 | 256 | 384 | 256 | 384 | 384 |
    | GPU Clock (MHz) | 760 | 760 | 760 | 800 | 844 | 720 |
    | Max DDR3 | 1866 | 1866 | 1866 | 1866 | 1866 | 2133 |
    | Current Price | $60 | $99 | N/A | $119 | N/A | $119 |

    By comparison, AMD has a history of making 65W CPUs. You may notice that the Kaveri model listed is the same model listed in the 45W table. This is one of the features of AMD’s new lineup: various models will have a configurable TDP range, and the A8-7600 will be one of them. By reducing the power by about a third, the user sacrifices a margin of CPU base and turbo speed, but sees no reduction in processor graphics speed. At this point in time, the A8-7600 (45W/65W) is set for a Q1 release rather than a launch-day release, and we have not received details of any further configurable-TDP processors.

    AMD 95-100W Bulldozer Based APUs

    | | A8-5600K (Trinity) | A10-5800K (Trinity) | A8-6600K (Richland) | A10-6800K (Richland) | A10-7700K (Kaveri) | A10-7850K (Kaveri) |
    | Microarch | Piledriver | Piledriver | Piledriver | Piledriver | Steamroller | Steamroller |
    | Socket | FM2 | FM2 | FM2 | FM2 | FM2+ | FM2+ |
    | Modules/Cores | 2/4 | 2/4 | 2/4 | 2/4 | 2/4 | 2/4 |
    | CPU Base Freq (MHz) | 3600 | 3800 | 3900 | 4100 | 3500 | 3700 |
    | Max Turbo (MHz) | 3900 | 4200 | 4200 | 4400 | 3800 | 4000 |
    | TDP | 100W | 100W | 100W | 100W | 95W | 95W |
    | L1 Cache | 128KB I$, 64KB D$ | 128KB I$, 64KB D$ | 128KB I$, 64KB D$ | 128KB I$, 64KB D$ | 192KB I$, 64KB D$ | 192KB I$, 64KB D$ |
    | L2 Cache | 2x2MB | 2x2MB | 2x2MB | 2x2MB | 2x2MB | 2x2MB |
    | Graphics | HD 7560D | HD 7660D | HD 8570D | HD 8670D | R7 | R7 |
    | GPU Cores | 256 | 384 | 256 | 384 | 384 | 512 |
    | GPU Clock (MHz) | 760 | 800 | 844 | 844 | 720 | 720 |
    | Max DDR3 | 1866 | 1866 | 1866 | 2133 | 2133 | 2133 |
    | Current Price | $100 | $130 | $120 | $140 | $152 | $173 |

Here we see the shift from 32nm SOI to bulk 28nm SHP manifesting itself in terms of maximum attainable frequency. Whereas the A10-6800K ran at 4.1/4.4GHz (base/max turbo), the A10-7850K drops down to 3.7/4.0GHz. TDP falls a bit as well, but it's very clear that anyone looking to the high end of AMD's CPU offerings for a performance increase won't find it with Kaveri. I suspect we'll eventually see AMD return to the high end, but that will come once the Bulldozer family has run its course. For now, AMD has its sights set on the bulk of the mainstream market - and that's definitely not at 95/100W.

    Kaveri Motherboard/Socket Compatibility

AMD’s socket and chipset situation also shifts slightly with Kaveri, maintaining a small difference from Richland. The new APUs will only fit into an FM2+ socket motherboard, which differs from FM2 by two pins; Richland and Trinity APUs will also fit into FM2+, but Kaveri APUs will not fit into any older FM2 motherboard. On the chipset side, AMD is adding the A88X chipset to the Bulldozer chipset family, complementing A55, A75 and A85X. As with Trinity and Richland, the chipset is not a definitive indicator of a motherboard's socket, with one exception: A88X will only appear on FM2+ motherboards.
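The compatibility rules are easy to misread at a glance, so here is a toy lookup that encodes them (an illustrative Python sketch only; the socket and chipset names come from the paragraph above):

```python
# Which APU generations fit which FM2-family sockets, per the
# compatibility rules described above.
SOCKET_SUPPORT = {
    "FM2":  {"Trinity", "Richland"},            # Kaveri will NOT fit
    "FM2+": {"Trinity", "Richland", "Kaveri"},  # backward compatible
}

# Chipset is not a definitive indicator of socket, except for A88X,
# which only appears on FM2+ boards.
CHIPSET_SOCKETS = {
    "A55":  {"FM2", "FM2+"},
    "A75":  {"FM2", "FM2+"},
    "A85X": {"FM2", "FM2+"},
    "A88X": {"FM2+"},
}

def apu_fits(apu_core, socket):
    """True if an APU generation drops into the given socket."""
    return apu_core in SOCKET_SUPPORT.get(socket, set())

assert apu_fits("Richland", "FM2+")        # older APUs work in FM2+
assert not apu_fits("Kaveri", "FM2")       # Kaveri needs FM2+
assert CHIPSET_SOCKETS["A88X"] == {"FM2+"}
```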

AMD has the makings of a potential platform changer here, and the programming-paradigm shift from ‘normal’ code to HSA is certainly going to be at the forefront of AMD’s APU efforts for the foreseeable future.


    What Salaries Do Startup Founders Pay Themselves?


    Comments:"What Salaries Do Startup Founders Pay Themselves?"

    URL:http://thenextweb.com/insider/2014/01/14/salary-founder-favorite-startup-get-probably-high-one/#!sbu1l


    If you’re a startup founder, how much should you pay yourself? Not very much, according to data that Compass has shared with The Next Web.

    The company collected salary data from 11,160 startups around the world that use its benchmarking tool. In Silicon Valley, 75% of founders pay themselves less than $75,000 per year and 66% pay themselves less than $50,000 per year, according to the data. Average salaries around the world vary from a low of $30,208 in India to a high of $72,363 in Australia.

    Back in 2008, investor Peter Thiel posited that a low CEO salary was the best predictor of startup success:

“The lower the CEO salary, the more likely it is to succeed.

“The CEO’s salary sets a cap for everyone else. If it is set at a high level, you end up burning a whole lot more money. It aligns his interest with the equity holders. But [beyond that], it goes to whether the mission of the company is to build something new or just collect paychecks.

“In practice we have found that if you only ask one question, ask that.”

While you could argue that having a product people want to pay for matters more to success than the salary of top executives, being cost-conscious is certainly a sensible mindset for a responsible founder. It seems that the majority of founders align their salaries with this thinking.

    With the exception of India, the salary ranges for startup founders around the world (see the charts at the end of this post) are remarkably similar. Low founder pay is a particularly striking factor in Silicon Valley, when you consider the high cost of living in cities like San Francisco.

    Does the stage that a startup is at affect its founders’ salaries? Not as much as you might think. Founders tend to keep their salaries below $45,000 per year until they hit a high-growth product phase. Even then, the average salary for a startup with a mature product is $70,109 – nowhere near enough for a Scrooge McDuck lifestyle.

    Funding level appears to be a stronger influence on founder salary, as you might expect, but again, we’re not talking “I’m just going for a swim in my money bin, boys” cash here.

    Few startups are transparent when it comes to salaries, so it’s interesting to see at least some aggregate data about the salary landscape as a whole. While the image of the startup founder as a champagne-quaffing, private-jet-hiring spendaholic may still remain for some who remember the days of the first dotcom bubble, it’s a thing of the past for most, it seems. Best save that for after the big exit, eh?

     

    Dropbox and Uber: Worth Billions, But Still Inches From Disaster | Wired Business | Wired.com


    Comments:"Dropbox and Uber: Worth Billions, But Still Inches From Disaster | Wired Business | Wired.com"

    URL:http://www.wired.com/business/2014/01/dropbox-uber/


    Dropbox went dark over the weekend.

According to the company, the widespread outage was the result of a bug it introduced while updating the hundreds of computer servers that drive its massively popular file-sharing service. But the problem was bigger than that. The San Francisco-based startup not only faced countless complaints from users across the net, but also had to deflect rumors that the service had been hacked, claims that turned out to be a hoax.

On one level, a dust-up like this is just part of life as a startup. Things go wrong, people get upset, problems are solved, lessons are learned. But the stakes are higher when you’re Dropbox — or any other tech startup that has ascended to the misty heights of the billion-dollar club. This weekend’s Dropbox outage, along with recent problems for Uber and Snapchat, shows just how close such companies skate to complete disaster — not because of anything they necessarily did wrong, but because of the very nature of their businesses.

In those tender days between two-scrappy-founders-in-an-apartment and established business, these burgeoning outfits have hundreds of millions of dollars invested in their future, and that future is far from certain. In an age when people can so easily abandon one web service for another, a single screw-up can be all it takes to bring things crashing down for good. And the best of these companies know it.

The most successful tech giants — think Google and Facebook — have been able to insulate themselves from the big SNAFU by performing well for long enough that we become inescapably dependent on them. For many of us, Gmail would have to delete our entire accounts before switching became even plausible. But even for billion-dollar companies still in a period of massive growth, such cushions aren’t always there to catch them. If they fall, the landing could still be hard.

    Do One Thing, Do It Best

    In an interview with WIRED this past fall, Dropbox co-founder Drew Houston acknowledged that his company has almost no margin for error. If Dropbox accidentally destroyed just one person’s file, he said, it could erode the trust of all its users. “This is like the same sort of genre of problem as the code that you use to fly an airplane. Even if it’s a little bug, it’s a big problem.”

The risk for Dropbox is that, at its core, it essentially does only one thing: it syncs your files across all your devices. On the one hand, this single-mindedness has brought Dropbox its tremendous success. Founded in 2007, the company concentrates on doing one thing and doing it well. But that strength is also its greatest vulnerability — Dropbox is not diversified. Many other companies large and small now offer much the same service. If Dropbox loses your trust by messing up the one thing you thought it did best, you could easily switch your allegiance to another company.

    ‘This is like the same sort of genre of problem as the code that you use to fly an airplane. Even if it’s a little bug, it’s a big problem’

    — Drew Houston

    In my several years using the service, I have never had a file lost or corrupted. Its tool for uploading photos from mobile devices is a breeze. And like the best-designed products, it works so effectively that it fades into the background.

    This weekend, it didn’t work. But files weren’t lost or destroyed, the company says. It’s just that people couldn’t reach them. “Your files were never at risk during the outage,” Dropbox engineer Akhil Gupta wrote. The databases affected, he said, “do not contain file data.”
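That distinction reflects a common storage design: the records that locate and describe files live in databases, while the file bytes themselves live in separate storage. The sketch below illustrates the general pattern (a generic toy model, not Dropbox's actual architecture or schema):

```python
# Generic metadata/data separation (illustrative only, not Dropbox's
# real design). If the metadata database goes down, files become
# unreachable, but the file bytes themselves are never touched.
METADATA_DB = {"/photos/cat.jpg": "blob9f2c"}   # path -> blob pointer
BLOB_STORE = {"blob9f2c": b"...jpeg bytes..."}  # pointer -> file data

def read_file(path, metadata_db_up=True):
    if not metadata_db_up:
        # Outage: we can't *find* the file, but its data is intact.
        raise RuntimeError("metadata db unavailable; file data unharmed")
    return BLOB_STORE[METADATA_DB[path]]
```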

    Rather than diversifying, Houston has worked to hire some of the smartest talent in tech, including the inventor of the Python programming language, to find the best answers and buffer against problems like this weekend’s outage. So far, that seems to have paid off.

    This weekend’s blip won’t put many people off Dropbox. If they’re like me, users have come to rely heavily on Dropbox for quickly storing and sharing files and generally getting work done. And Dropbox has created a strong well of trust from which it can draw. If it had branched into other services at the expense of its core syncing service, you can bet that trust wouldn’t be there.

    How Uber Goes Under

    Uber, the ride-sharing startup, is facing its own moment of crisis. Over the past months, many people — and many news stories — have complained about the company’s “surge pricing,” where it raises fares during times when lots of people want a ride, such as during snowstorms and over holidays. Now, a different kind of anger has surfaced: Paris-based blog Rude Baguette reports that, in France, protestors are attacking Uber cars, throwing eggs, slashing tires, and breaking windows. Uber confirmed the attacks.

    The violence comes as the French government attempts to address complaints that app-based car services like Uber are undermining the traditional taxi industry. The U.S. taxi industry feels similar ill-will toward Uber for undermining its business model, but it has turned to the courts and city councils to protect its interests.

    ‘If you are unreliable, customers just disappear. The thing is that nowhere in any of the press are you hearing about us being unreliable’

    — Travis Kalanick

Surge pricing and conflict with taxi services might seem like separate issues. But both reflect the consequences of Uber’s choice to stake its success, like Dropbox, on an unbending vision of doing one thing exceptionally well. It too is not diversified. Uber could make the recent complaints go away fairly quickly: it could drop surge pricing, and it could acquiesce and change its service in cities where government and industry have come out against it. But Uber does neither of these things, because in the eyes of its outspoken CEO Travis Kalanick, backing down would compromise the foundation of his business: to provide a great ride.

    Surge pricing, according to Uber, is intended to stimulate supply and curb demand to ensure the two match. Otherwise, the logic goes, would-be riders are left stranded without a car. Last month, during the height of the backlash against Uber over fares reported at seven times the usual during a New York snowstorm, Kalanick told WIRED that the bad publicity his company faced over surge pricing would pale compared to the impact of Uber not being able to offer a ride at all.

    “If you are unreliable, customers just disappear,” he said. “The thing is that nowhere in any of the press are you hearing about us being unreliable.”
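Mechanically, surge pricing is just a demand-responsive multiplier on the fare. The toy sketch below captures the logic described above (purely illustrative; Uber has not published its actual algorithm, and the cap here is an assumption based on the seven-times fares reported during the snowstorm):

```python
def surge_multiplier(ride_requests, available_drivers, cap=7.0):
    """Toy demand-responsive pricing: raise fares as requests
    outstrip drivers, both to curb demand and to pull more drivers
    onto the road. Not Uber's actual (unpublished) algorithm."""
    if available_drivers == 0:
        return cap
    ratio = ride_requests / available_drivers
    # No surge while supply covers demand; scale up with the shortfall.
    return min(cap, max(1.0, ratio))

print(surge_multiplier(100, 120))  # 1.0 -- quiet night, normal fares
print(surge_multiplier(300, 60))   # 5.0 -- snowstorm, fares spike
```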

    If rides don’t come through, or are slow to arrive, the reason for Uber’s existence disappears. Kalanick is a professed admirer of Amazon founder and CEO Jeff Bezos, an Uber investor. Amazon now does a lot of things well, but if it stopped delivering the products people ordered from the site quickly and accurately, it would fold. Uber sees itself as offering a similar level of service for rides. If it backed down, if the rides stopped showing up, in Uber’s eyes, it would be like the Amazon box you ordered not arriving on your doorstep.

    This is also why Uber believes it can’t compromise once it begins to offer its services in a new city, or a new country, like France. It strives to work with regulators to accommodate its existing service rather than changing how it works. As a company, Uber is as much a designer of algorithms as a provider of rides. On the streets, Uber guides cars to certain places at certain times. Back at its San Francisco headquarters, teams of mathematicians and data scientists are figuring out how to guide them. And the math is hard enough without additional constraints.

Yes, the constraints still come: a new French law requires drivers to wait 15 minutes between taking a reservation and picking up a passenger. But the company has shown a defiant reluctance to put constraints on itself.

    Such stubbornness is often seen as arrogance: the hotshot, elitist startup that believes it’s above the rules. But Uber has made the choice that getting bashed on Twitter — or by City Hall — isn’t as bad as customers opening up the app and seeing no rides on the map. The first threat is manageable. The second is existential — customers just open up another ride-sharing app to see if an Uber competitor has cars instead.

    Snapchat Disappears

    Snapchat’s moment of weakness was more acute, and the company appeared to deal with it the worst. Recently, a group of hackers exploited a known security hole in the private messaging service, leaking the phone numbers of millions of users. Snapchat co-founder and CEO Evan Spiegel faced a barrage of withering criticism for failing to apologize quickly for the incident and the sometimes peevish posture the company has taken in the wake of the leak.

    For a company whose whole business is based on privacy, such a breach is a serious threat to its livelihood — not least because so many others are now angling to offer similar services.

The cautionary tale here is Friendster. As the startup that was Facebook before Facebook, Friendster was one of the first companies to gain genuine traction in what was then widely called “Web 2.0.” But then it stopped working the way people wanted it to. It didn’t even screw up that badly — accounts of its demise describe fairly typical management and technical issues. But even this can bring down a startup in a digital world that moves so quickly. Facebook arrived and also did the one thing Friendster did — connecting people — but better. And for Friendster, it was too late.

    For Snapchat, a meaningful apology isn’t about good manners. It’s about showing it appreciates the gravity of its violation of trust. As is making sure it doesn’t happen again. As with Dropbox and Uber, Snapchat has earned the love of its users by doing one thing they love really well. Take that thing away, and the love goes with it.

    PSD to HTML is Dead - Treehouse Blog


    Comments:"PSD to HTML is Dead - Treehouse Blog"

    URL:http://blog.teamtreehouse.com/psd-to-html-is-dead


    PSD to HTML tutorials are all over the web. In fact, many people have asked me why there’s not a PSD to HTML tutorial on Treehouse. In addition to the tutorials, there are lots of companies that will accept a PSD and convert it to a webpage for roughly $100 USD.

    Google returns more than 48 million results for a “psd to html” search. It’s popular, but not the best way to make websites.

If it’s so popular, then how can I say that it’s dead? Well… I wish every web design quandary could fit into a poetic 140-character tweet, but this is a fuzzy issue that demands a more articulate explanation. Let’s dig in.

    What is PSD to HTML?

    In general, “PSD to HTML” is a workflow. First, a web page is designed in a Photoshop Document (PSD) and then converted to code (using HTML, CSS, and JavaScript). You could swap Photoshop with any other image editor (like Pixelmator, GIMP, and so on), but the principle is the same. Here’s a slightly more detailed step-by-step breakdown:

1. Design a high-fidelity, pixel-perfect mockup in Photoshop of exactly what you want your site to look like.
2. Use the slice tool to divide your website’s imagery and then export it for the web.
3. Write HTML and CSS that utilizes the imagery you exported from Photoshop.

    At first glance, this might seem like a good idea. It can be difficult to start coding if you don’t know what the final result is going to look like, so experimenting in Photoshop first and then “exporting” it to HTML sounds like a granular and sensible process.

In Photoshop, the slices feature in the save-for-web dialog used to be an essential tool for designers saving assets from a PSD. It made it easy to “slice” a design into images and then lay them out in a web page using HTML and CSS.

    Taking this idea further, many web companies have used PSD to HTML as a template for team workflows. In other words, a designer creates the Photoshop mockup and then hands it over to a developer that writes all of the code. In modern times, the job role of a web designer tends to encompass aesthetics as well as HTML and CSS coding.

    Was PSD to HTML ever a good idea?

Yes, PSD to HTML workflows used to be one of the best ways to make websites. There are two big reasons why PSD to HTML used to make sense.

    The first reason is for image assets. Before browsers supported all the wonderful features of modern CSS (drop shadows, rounded corners, gradients, and more) it was very difficult to create cross-browser effects without the use of images. Designers would create shadows and rounded corners as images, then clever coding tricks were used to place the imagery on the page. These assets would need to be realized no matter what, so creating them at the same time as the high fidelity mockup actually saved time.

    Prior to the development and widespread adoption of CSS, many websites were a collection of image assets that looked something like this. One of the most innovative techniques of the time was the sliding doors technique to create tabs back in 2003.

    Secondly (and perhaps more importantly) the web used to only be available on desktop browsers and wasn’t really present on phones and tablets in the way it is today. Designing for one fixed resolution of 1024×768 used to be totally viable.

For these two reasons, it’s understandable why a designer would look to Photoshop as their primary web design tool: image assets were needed, and there was only a single screen resolution to target.

    What’s wrong with PSD to HTML now?

    When pitted against other areas of art and technology, the web is a relatively young medium and things change fast. I’ve made dozens of websites using some variation of the PSD to HTML mindset and I’m sure many people reading this have done the same, but it’s time to move on. Here are the primary reasons why I believe thinking in terms of PSD to HTML is dead.

    Responsive Web Design

    First, there are now a myriad of methods for browsing the web. Phones, tablets, desktops, notebooks, televisions, and more. There is no single screen resolution that a designer can target. Taking that idea a few steps further, there’s really no number of screen resolutions that you can safely “target” anymore.

    Screensiz.es provides tables of information about popular hardware devices.

I’m not going to delve into the finer details of responsive web design or scalable design, but the point is that Photoshop is pixel-based, while web pages are fluid and change.

    CSS Design

    Second, new features in CSS have now become commonly available. There are still a few lingering issues here and there, but support has vastly improved in the last several years. Common effects like shadows, gradients, and rounded corners can be accomplished in CSS and usually don’t even need an image-based fallback anymore.

    Maturity

    Third, the web industry has grown up a lot. Collectively we’ve had more time to refine our present understanding of what works and what doesn’t. Most companies will expect a designer to take ownership of aesthetics as well as HTML and CSS code.

    This also means there are much better tools to support modern workflows. CSS frameworks like Bootstrap and Foundation make it more viable to design in the browser. Apps like Balsamiq and Omnigraffle help to wireframe sites rapidly. Pencil and paper mockups have stood the test of time because they allow for extremely rapid iteration.

    Does this mean Photoshop is dead?

No! Not even close. Photoshop is still very important to web design. The problem comes in when a powerful tool like Photoshop is used as a catch-all solution without thinking about the higher-level task (designing websites). Photoshop is awesome for editing and exporting photographs for web usage. There are also plenty of situations where it still might make sense to generate full-detail mockups (in Photoshop, Illustrator, or otherwise) as part of a more complete process. Here are a couple of examples:

    • High fidelity mockups can be a critical communication tool when working with web design clients. It might seem faster to skip a high detail mockup, but it could hurt later on, because many clients aren’t going to understand how a wireframe will translate to a web browser. A high fidelity mockup can serve as a discussion tool before writing lots of code (only to discover it’s not what the client wanted).
    • High fidelity mockups can be very important when working in medium to large sized teams. We often will create high res mockups at Treehouse when planning new courses or designing new features of our site, because it’s a powerful way to sync everyone’s mental model of what a feature will look like or how a project might look once it’s finished.

    These two examples have a key difference from the PSD to HTML way of thinking. High detail mockups are still sometimes generated, but not so that they can be “tossed over the fence” to a team of developers or sliced up into code. Rather, Photoshop mockups can be used as a visual aid to discuss ideas. In a PSD to HTML workflow, the Photoshop document represents the final site and it’s expected to look exactly the same in the browser. This is a subtle but important difference.

    Different Strokes

    Everyone’s workflow is different and nobody knows how to make the perfect website. You should always do whatever is most effective for you and your colleagues. Pushing pixels around in Photoshop is a ton of fun, but I can admit to many occasions when I’ve pushed the pixels too far. The key is to know yourself and what makes you perform at your best. If you have any questions or opinions, I’d love to hear about them in the comments!

    I wish I had another hand so I could give Scala three thumbs down!


    Comments:"I wish I had another hand so I could give Scala three thumbs down!"

    URL:http://www.theserverside.com/news/thread.tss?thread_id=78441


TheServerSide's going to be chatting with Giles Alexander, a lead developer at ThoughtWorks, in the upcoming week or so. We last chatted with him just over a year ago about his experiences with mobile application development, which produced a couple of very popular articles, including two on the Y-shaped mobile development method that were retweeted and blogged about ad infinitum.

    Interview with Giles Alexander: The Y-shaped mobile method
    Article: Modern mobile development techniques: The Y-shaped methodology

I pinged Giles (@gga) on Twitter last weekend, suggesting that we catch up. He's got his fingers in a lot of pies, which makes him a good bellwether for what's happening in the industry. He indicated that he had less to say about mobile these days, but had plenty to say about Scala, as he pointed me to a recently published blog entry, unaffectionately titled Scala: 1-star, would not program again. Apparently, Giles isn't exactly enamored with the language, or as he says, “I don’t ever want to touch Scala again.”

With Scala you feel smart having just got something to work in a beautiful way, but when you look around the room to tell your Clojure colleague how clever you are, you notice he left 3 hours ago and there is a post-it saying use a Map. – Daniel Worthington-Bodart

The problems he cites? Crazy-slow compile times that make test-driven development (TDD) completely impractical. Stability of the libraries is a concern. The punctuation-strewn syntax, which supposedly keeps the language flexible but instead makes code incredibly difficult to decipher, also bears mentioning. The list is long, and Giles has no problem rhyming off annoyances: "The confused array of build tools. The hopeless confusion of even the most powerful of IDEs. The enormous array of types of types. The horrible repetition required by case classes."

New languages are indubitably exciting to learn and play with, and everyone is interested in improving upon what we already have. Sometimes it appears that languages like Scala or Clojure or Ceylon are the fix that the Java ecosystem needs to improve productivity and attain the linear scalability that simply doesn't exist with traditionally written Java applications. But the fact is, Java, despite some misgivings, is a well-thought-out language that is both powerful and consistent, making it easy to learn and, more importantly, easy to maintain. Sure, new systems will appear that try to knock the crown off the Java language, but for now, the would-be emperors of the JVM are increasingly being shown to be wearing no clothes.

    As was mentioned earlier, TheServerSide is going to be talking with Giles in the near future. If you've got an issue or two you would like us to take him to task on in terms of his assessment of Scala, let us know.

    Scala - 1 Star: Would Not Program Again by Giles Alexander

    You can follow Giles Alexander on Twitter: @gga
    You can also follow Cameron McKenzie: @potemcam

     

    Georgia Tech Researchers Reveal Phrases that Pay on Kickstarter | College of Computing


    Comments:"Georgia Tech Researchers Reveal Phrases that Pay on Kickstarter | College of Computing"

    URL:http://www.cc.gatech.edu/news/georgia-tech-researchers-reveal-phrases-pay-kickstarter



    New Georgia Tech Study Finds Pitch Language Plays Major Role in Success of Crowdfunding Projects

    January 14, 2014

Researchers at Georgia Tech studying the burgeoning phenomenon of crowdfunding have learned that the language used in online fundraising pitches holds surprising predictive power over the success of such campaigns.

    As part of their study of more than 45,000 projects on Kickstarter, Assistant Professor Eric Gilbert and doctoral candidate Tanushree Mitra reveal dozens of phrases that pay and a few dozen more that may signal the likely failure of a crowd-sourced effort. 

    “Our research revealed that the phrases used in successful Kickstarter campaigns exhibited general persuasion principles,” said Gilbert, who runs the Comp. Social Lab at Georgia Tech. “For example, those campaigns that follow the concept of reciprocity – that is, offer a gift in return for a pledge – and the perceptions of social participation and authority, generated the greatest amount of funding.” 

    While offering donors a gift may improve a campaign’s success, the study found the language project creators used to express the reward made the difference. For example, the phrases “also receive two,” “has pledged” and “project will be” strongly foretell that a project will reach funding status, while phrases such as “dressed up,” “not been able” and “trusting” are attached to unfunded projects. 

    The researchers examined the success of Pebble, which is the most successful Kickstarter campaign to date with more than $10 million in pledges, and compared it to Ninja Baseball, a well-publicized PC game that only earned a third of its $10,000 goal. 

    “The discrepancy in funding success between projects like Pebble and Ninja Baseball prompted us to consider why some projects meet funding goals and others do not,” Mitra said. “We found that the driving factors in crowdfunding ranged from social participation to encouragement to gifts – all of which are distinguished by the language used in the project description.” 

For their research, Gilbert and Mitra assembled a list of all Kickstarter projects that had launched and reached their final date of fund collection as of June 2, 2012. Of the more than 45,000 projects, 51.53 percent were successfully funded while 48.47 percent were not.

    After controlling for variables such as funding goals, video, social media connections, categories and pledge levels, the researchers focused on more than 20,000 phrases before compiling a dictionary of more than 100 phrases with predictive powers of success or failure. 
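To make the method concrete, here is a rough sketch of how phrase-based prediction like this can be set up (a hypothetical reconstruction with scikit-learn and made-up example pitches; the paper's exact features, controls, and model are described in the publication itself):

```python
# Toy phrase-based funding predictor: n-gram phrase features feeding
# a linear classifier. The real study used 45,000+ projects and
# controlled for goal, video, category, and pledge levels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

pitches = [
    "backers will also receive two signed prints",  # funded
    "we have not been able to finish without you",  # unfunded
    "the project will be delivered in june",        # funded
    "trusting you to help us get dressed up",       # unfunded
]
funded = [1, 0, 1, 0]  # made-up labels for illustration

# Phrases as 1- to 3-word n-grams, mirroring the study's phrase units.
vectorizer = CountVectorizer(ngram_range=(1, 3), binary=True)
X = vectorizer.fit_transform(pitches)
model = LogisticRegression().fit(X, funded)

# Phrases with the largest positive weights "pay"; the most negative
# weights signal likely failure.
weights = sorted(zip(model.coef_[0], vectorizer.get_feature_names_out()))
print("signals failure:", [p for _, p in weights[:3]])
print("phrases that pay:", [p for _, p in weights[-3:]])
```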

The research suggested that the language used by creators to pitch their project plays a major role in driving the project’s success, accounting for 58.56 percent of the variance in success. The language generally fit into the following categories:

    • Reciprocity or the tendency to return a favor after receiving one as evidenced by phrases such as “also receive two,” “pledged will” and “good karma and.” 
    • Scarcity or attachment to something rare as shown with “option is” and “given the chance.” 
    • Social Proof, which suggests that people depend on others for social cues on how to act as shown by the phrase “has pledged.” 
    • Social Identity or the feeling of belonging to a specific social group. Phrases such as “to build this” and “accessible to the” fit this category. 
    • Liking, which reflects the fact that people comply with people or products that appeal to them. 
    • Authority, where people resort to expert opinions for making efficient and quick decisions as shown by phrases such as “we can afford” and “project will be.” 

    The team’s findings are summarized in the paper “The Language that Gets People to Give: Phrases that Predict Success on Kickstarter.” The paper will be formally presented at the 17th ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW 2014) to be held in Baltimore, Md., from Feb. 15 to 19.
