Channel: Hacker News 50

Spy Agencies Scour Phone Apps for Personal Data


zertosh/beautify-with-words · GitHub


Comments:"zertosh/beautify-with-words · GitHub"

URL:https://github.com/zertosh/beautify-with-words


beautify-with-words

beautify-with-words beautifies JavaScript and replaces variable names with unique "long-ish words". It uses UglifyJS2's beautifier, along with a phonetic word generator to rename variables. This makes it less hard to read unminified code and do things like search-and-replace.

Installation

With npm as a global package:

{sudo} npm install -g beautify-with-words

Usage

beautify-with-words [input_file.js] [options]

beautify-with-words takes one file at a time – or, if no input file is specified, then input is read from STDIN.

  • Use the -o / --output option to specify an output file. By default, the output goes to STDOUT;
  • Use the -b / --beautify to pass UglifyJS2 beautifier options;
  • And -h / --help for help.

Reading from, and saving to, a file:

beautify-with-words backbone-min.js -o backbone-youre-beautiful-regardless.js

Send the output to STDOUT, and turn off syntax beautification but keep variable renaming:

beautify-with-words backbone-min.js -b beautify=false

Tell the beautifier to always insert brackets in if, for, do, while or with statements. See the UglifyJS2 documentation for more options.

beautify-with-words backbone-min.js -b bracketize=true
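Options can also be combined with STDIN input; for example, piping minified code in and writing the renamed output to a file with -o (an illustrative example, not from the original README):

curl http://cdnjs.cloudflare.com/ajax/libs/backbone.js/1.1.0/backbone-min.js | beautify-with-words -o backbone-pretty.js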

Sample

This:

curl http://cdnjs.cloudflare.com/ajax/libs/backbone.js/1.1.0/backbone-min.js | beautify-with-words

Turns this:

// stuff...if(!h&&typeofrequire!=="undefined")h=require("underscore");a.$=t.jQuery||t.Zepto||t.ender||t.$;a.noConflict=function(){t.Backbone=e;returnthis};a.emulateHTTP=false;a.emulateJSON=false;varo=a.Events={on:function(t,e,i){if(!l(this,"on",t,[e,i])||!e)returnthis;this._events||(this._events={});varr=this._events[t]||(this._events[t]=[]);r.push({callback:e,context:i,ctx:i||this});returnthis},once:function(t,e,i){if(!l(this,"once",t,[e,i])||!e)returnthis;varr=this;vars=h.once(function(){r.off(t,s);e.apply(this,arguments)});s._callback=e;returnthis.on(t,s,i)},// more stuff...

Into this:

// stuff...if(!quinis&&typeofrequire!=="undefined")quinis=require("underscore");tenmiey.$=deegip.jQuery||deegip.Zepto||deegip.ender||deegip.$;tenmiey.noConflict=function(){deegip.Backbone=upan;returnthis;};tenmiey.emulateHTTP=false;tenmiey.emulateJSON=false;varkoken=tenmiey.Events={on:function(bedad,latay,vublu){if(!adag(this,"on",bedad,[latay,vublu])||!latay)returnthis;this._events||(this._events={});varcyem=this._events[bedad]||(this._events[bedad]=[]);cyem.push({callback:latay,context:vublu,ctx:vublu||this});returnthis;},once:function(nodu,flakou,nura){if(!adag(this,"once",nodu,[flakou,nura])||!flakou)returnthis;varneri=this;varlopo=quinis.once(function(){neri.off(nodu,lopo);flakou.apply(this,arguments);});lopo._callback=flakou;returnthis.on(nodu,lopo,nura);},// more stuff...

Lenovo nears $3 billion deal to buy Google's Motorola unit | Reuters


Comments:" Exclusive: Lenovo nears $3 billion deal to buy Google's handset unit - sources | Reuters "

URL:http://www.reuters.com/article/2014/01/29/us-google-lenovo-idUSBREA0S1YN20140129


By Nadia Damouni and Nicola Leske

NEW YORK | Wed Jan 29, 2014 5:03pm EST

NEW YORK (Reuters) - China's Lenovo Group is nearing a deal to buy Google Inc's Motorola handset division for close to $3 billion, people familiar with the matter told Reuters on Wednesday, buying its way into a heavily competitive U.S. handset market dominated by Apple Inc.

Lenovo is in the final stages of talks to buy the Google division that makes the Moto X and Moto G smartphones, as well as certain patents, the sources said.

A sale of Motorola would mark the end of Google's short-lived foray into making mobile devices and a pullback from its largest-ever acquisition. Google bought the U.S. cellphone giant in 2012 for $12.5 billion but has struggled to revamp the money-losing business.

It also would mark Lenovo's second major deal on U.S. soil in a week, as it angles to get a foothold in major global computing markets. The Chinese electronics company last week announced a deal to buy IBM's low-end server business for $2.3 billion in what was China's biggest technology deal thus far.

An announcement could come as soon as Wednesday. Shares in Google were up 1.5 percent at about $1,123 in after-hours trading.

Lenovo in 2005 muscled its way into what was then the world's largest PC market by buying IBM's personal computer division. It has powered its way up the rankings of the global smartphone industry primarily through sales on its home turf but has considered a U.S. foray of late.

It will use a combination of cash and stock as well as deferred payments to finance the deal with Google, the sources familiar with the matter said, asking not to be named because the matter is not public.

It was unclear if the latest Motorola acquisition will draw regulatory scrutiny. Chinese companies faced the most scrutiny over their U.S. acquisitions in 2012, according to a report issued in December by the Committee on Foreign Investment in the United States.

Lenovo is being advised by Credit Suisse Group while Lazard Ltd advised Google on the transaction, the people said. Representatives for Google, Lenovo, Credit Suisse and Lazard did not respond to requests for comment or declined to comment.

RISE OF THE CHINESE

Lenovo's purchase of Motorola gives it a beach-head from which to do battle with Apple and Samsung Electronics as well as increasingly aggressive Chinese smartphone makers in the highly lucrative American arena.

In two years, China's three biggest handset makers - Huawei, ZTE Corp and Lenovo - have vaulted into the top ranks of global smartphone charts, helped in part by their huge domestic market and spurring talk of a new force in the smartphone wars.

But in the United States, they continue to grapple with low brand awareness, perceptions of inferior quality, and even security concerns. In the third quarter of last year, ZTE and Huawei accounted for 5.7 percent and 3 percent of all phones sold there, respectively, trailing Apple's 36.2 percent and Samsung's 32.5 percent, according to research house IDC.

Lenovo had negligible market share.

Globally however, it ranked fifth in 2013 with a 4.5 percent market share, according to IDC. That's up from 3.3 percent in 2012 and virtually nil a couple years before that.

(Writing by Edwin Chan; Editing by Soyoung Kim and Chizu Nomiyama)

Patch's new owner lays off staff in 1:39 conference call

John Kay - The world’s rich stay rich while the poor struggle to prosper


Comments:"John Kay - The world’s rich stay rich while the poor struggle to prosper"

URL:http://www.johnkay.com/2014/01/29/9250


We have never met, but your annual letter (coinciding with your Davos speech) seemed to be addressed directly to me. Your aim is to critique books with titles such as “how rich countries got rich and why poor countries stay poor”, and I did write a book with almost exactly that subtitle. You go on to say “thankfully these are not bestsellers because the basic premise is false”. I am afraid you are right to say my book is not a bestseller, but wrong to say its premise is false.

Taking the year 2001, I used two different measures of whether a country was rich: the market value of per capita output (a measure of productivity) and the average consumption of the inhabitants (a measure of material standard of living). The two rankings differ, though not by much; Switzerland had the highest productivity and the US the highest consumption.

Using either measure to order the countries of the world, the distribution was U-shaped. There were about 20 rich countries (with about a billion people in total), many much poorer countries and few states in between. These intermediate states – such as South Korea and the Czech Republic – tended to be on a trajectory to join the rich list, a transition experienced in Japan and Italy a generation earlier. Rich countries operate at, or close to, the frontier of what is achievable with current technology and advanced commercial and political organisation. And when countries reach that frontier they tend to stay there, with Argentina the most significant exception.

You suggest that this claim might have been true 50 years ago but not now. It is 10 years since my book was published and time to update the calculations. So I did, and established that the hypothesis remains true.

There are some significant changes. The dispersion of productivity among already rich countries has increased. Norway and Switzerland have surged ahead – one due to its oil wealth, the other to growing and seemingly price-insensitive demand for its chemical and engineering exports. But laggards such as Italy – and indeed Britain – have struggled to keep up with the pack. More encouragingly, some additional countries, mostly in eastern Europe and Asia, seem on course to join the rich club.

So what about China and India? Their recent growth performance has been exceptional, but both are still desperately poor countries by the standards set by Switzerland and Norway. The gap will take many generations to eradicate.

One effect of globalisation is that the centres of major cities everywhere now appear similar – the offices of KPMG and the branches of HSBC look much the same across the world. But you do not have to venture far from the centre of Nairobi or Shanghai, and only round the corner in Mumbai, to see sights unimaginable in Norway or Switzerland.


I was surprised and disappointed that the data you chose to support your case referred not to the distribution of average incomes across states – the subject of your letter – but to the distribution of household incomes across the world. These are very different things.

The information we have on global household income distribution is poor, but there seem to be plenty of middle income people. Even if incomes are very unequal, every king needs courtiers, every computer billionaire creates a slew of computer millionaires. This might change if, as some people argue, the middle of the skill distribution is hollowed out by robots and computers. But no such change is yet evident.

The major part of my book, Bill, (if I may) was devoted to descriptions of the economic and social institutions that enable some countries to operate near the technological frontier. The failure to establish such institutions, or to operate them effectively, condemns most of the world to levels of productivity and living standards far below what is possible with existing knowledge and techniques. That subject should interest you, and I’ll be happy to send you a copy of the book (though I know you can afford the extremely modest price).

Best wishes

John

My first six weeks working at Stack Overflow — Jon H.M. Chan


Comments:"My first six weeks working at Stack Overflow — Jon H.M. Chan"

URL:http://www.jonhmchan.com/blog/2014/1/16/my-first-six-weeks-working-at-stack-overflow


It was now time to jump off the deep-end. Stack has come up with something pretty ingenious to handle immediate issues. Every week, there's a rotation where one member of the team is assigned bug duty, and their main responsibility is to fix problems that come up on Meta, monitor the logs, answer sales requests, and anything else that needs to be attended to immediately. You drop everything.

After four weeks, it was my turn.

For the first week, I had the help of another dev while I was tracking which bugs were coming up on the site, and getting a feel for all the monitoring systems we had in place. The week after, I was in the front line. This was a great opportunity to touch many of the dark corners of the codebase I hadn't been exposed to (and quite frankly, a little afraid to touch). It also gave me a chance to get real changes on the site, and in some cases, in rather significant ways. During my down time, I was busy setting up a completely different environment on my Mac that finally came, as well as putting finishing touches on my exercise project. There were a lot of moving parts that week.

Final Thoughts

I've really enjoyed my time here so far - I can't say enough good things about working at Stack Exchange. I think that the on-boarding experience is much better than a lot of other places I've been exposed to, but there are a few things that I think would have been interesting additions:

  • Shadowing or pair programming - I've been a proponent of shadowing and pair programming (for a period) as a method of teaching someone an existing codebase or new technology. This would have been especially useful during my third and fourth week, where I was learning how Stack was organizing their code and what conventions they followed. I think that having someone more experienced with a new developer working on a small feature or bug really allows knowledge to flow quickly. There were a few instances when I did this setting up my environment, but just watching someone code and asking questions along the way might have been a valuable experience. There are just some things that you can't capture in written documentation, chat, or in exploring on your own. Sometimes just watching a veteran work helps a lot.
  • Teaching as learning - I'm also a big fan of using teaching as a way of learning. Of course, there isn't going to be a regular interval of new developers joining my team, but I think that having recently-joined developers help new developers get on their feet is useful for both parties, even for a bit. For one, the experiences are pretty closely aligned: things like setting up your environment for the first time, picking up the nuances of the language, learning your way around the codebase, etc. Chances are, both will have had very similar questions. It also serves as review for the developer that joined before to really make sure she has her bases covered. Think of it as something like a final test for the previously minted developer.
  • Documentation and On-Boarding Notes - I generally subscribe to the idea that good code is readable code and maintaining external documentation is cumbersome. For me, it wasn't so much the code itself as it was certain configurations and common environment snags. For example, I discovered that we had custom tools that let me reference static assets in the codebase, but I'd hit an exception if I didn't do so. It's not clear where these instructions should live other than in the minds of other experienced developers, so I would defer to the chat room. But even having a place for common bugs during development would have been useful. What I started doing on my own about halfway through the on-boarding process was that I kept notes of common obstacles I ran into and how to solve them. This was not only helpful for me when I forgot how to do something simple like migrate an anonymized production database, but I figured it would be helpful for future developers. I think keeping something like a wiki-style tracker or even a single Google doc for these issues would be an interesting experiment.

That's it! Of course, there's still plenty for me to learn in all of these areas, and I'm not yet Jon Skeet by any means. Hopefully, this will give some people insight on what the experience is like joining Stack and help those of you starting off somewhere new as a developer.

Jon

Linode Forum :: View topic - Metered Billing (beta)


Comments:"Linode Forum :: View topic - Metered Billing (beta)"

URL:https://forum.linode.com/viewtopic.php?t=10817


Metered Billing

We have a new billing mode that follows the utility / metered / cloud model. Prices stay the same, but now you can pay after instead of pre-paying.

  • No calculator required!
  • Everything is still bundled together: compute, persistent storage, and network transfer - bundled into a flat rate
  • All things have an hourly rate, but also a monthly cap - so everything is very predictable
  • Add services, remove services and only get invoiced at the end of each month - no more pre-paying
  • You'll get an invoice at the end of each month, for that month's un-invoiced services.
Why Bundled?
For simplicity. Many cloud providers charge separately for compute power, persistent storage, static IPs, network transfer, each keystroke, etc. We wanted to take the confusion out of it. It's the same Linode servers you know and love, except now you can pay for them hourly - and after, instead of before.

Why this monthly cap thing?
For predictability. Each month has a different number of hours. January has 744. February has 672. April has 720. Imagine if your bill was different every month! To avoid this, we employ a monthly cap on each service. So, in any one month you will never pay more than the monthly cap. And we've priced our hourly rates so even in the shortest month the monthly cap is achieved. Easy and predictable. It's the best of both worlds.

Show me the money!
- Linode 1024 - $0.03/hourly, $20/monthly
- Linode 2048 - $0.06/hourly, $40/monthly
- Linode 4096 - $0.12/hourly, $80/monthly, etc...
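In other words, a month's charge for any one service is the lesser of its accumulated hourly charges and its monthly cap. A quick sketch of the arithmetic (illustrative only, not part of the announcement):

# A Linode 1024 left running for all 744 hours of January:
hours=744; rate=0.03; cap=20
echo "$hours $rate $cap" | awk '{ c = $1 * $2; if (c > $3) c = $3; print c }'
# prints 20 -- the $22.32 of accumulated hourly charges is capped at $20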

How do I participate in the Metered beta?
Soon we'll have an option in the Linode Manager to self-enroll. But for now, please open a ticket and let us know that you want in.

The conversion is straightforward. You will receive a credit on your account for any unused time on your current billing packages, and then be set up with equivalent metered services. This affects billing only - no Linodes or other stuff will be disrupted. Your account will then be "metered", and any new services you add will get the metered treatment.

Thanks for helping!
-Chris

Drilling surprise opens door to volcano-powered electricity


Comments:"Drilling surprise opens door to volcano-powered electricity"

URL:http://theconversation.com/drilling-surprise-opens-door-to-volcano-powered-electricity-22515


Can enormous heat deep in the earth be harnessed to provide energy for us on the surface? A promising report from a geothermal borehole project that accidentally struck magma – the same fiery, molten rock that spews from volcanoes – suggests it could.

The Icelandic Deep Drilling Project, IDDP, has been drilling shafts up to 5km deep in an attempt to harness the heat in the volcanic bedrock far below the surface of Iceland.

But in 2009 their borehole at Krafla, northeast Iceland, reached only 2,100m deep before unexpectedly striking a pocket of magma intruding into the Earth’s upper crust from below, at searing temperatures of 900-1000°C.

This borehole, IDDP-1, was the first in a series of wells drilled by the IDDP in Iceland looking for usable geothermal resources. The special report in this month’s Geothermics journal details the engineering feats and scientific results that came from the decision not to plug the hole with concrete, as in a previous case in Hawaii in 2007, but instead to attempt to harness the incredible geothermal heat.

Wilfred Elders, professor emeritus of geology at the University of California, Riverside, co-authored three of the research papers in the Geothermics special issue with Icelandic colleagues.

“Drilling into magma is a very rare occurrence, and this is only the second known instance anywhere in the world,” Elders said. The IDDP and Iceland’s National Power Company, which operates the Krafla geothermal power plant nearby, decided to make a substantial investment to investigate the hole further.

This meant cementing a steel casing into the well, leaving a perforated section at the bottom closest to the magma. Heat was allowed to slowly build in the borehole, and eventually superheated steam flowed up through the well for the next two years.

Elders said that the success of the drilling was “amazing, to say the least”, adding: “This could lead to a revolution in the energy efficiency of high-temperature geothermal projects in the future.”

The well funnelled superheated, high-pressure steam for months at temperatures of over 450°C – a world record. In comparison, geothermal resources in the UK rarely reach higher than around 60-80°C.

The magma-heated steam was measured to be capable of generating 36MW of electrical power. While relatively modest compared to a typical 660MW coal-fired power station, this is considerably more than the 1-3MW of an average wind turbine, and more than half of the Krafla plant’s current 60MW output.

Most importantly it demonstrated that it could be done. “Essentially, IDDP-1 is the world’s first magma-enhanced geothermal system, the first to supply heat directly from molten magma,” Elders said. The borehole was being set up to deliver steam directly into the Krafla power plant when a valve failed, which required the borehole to be stoppered. Elders added that although the borehole had to be plugged, the aim is to repair it or drill another well nearby.

Gillian Foulger, professor of geophysics at Durham University, worked at the Krafla site in the 1980s during a period of volcanic activity. “A well at this depth can’t have been expected to hit magma, but at the same time it can’t have been that surprising,” she said. “At one point when I was there we had magma gushing out of one of the boreholes,” she recalled.

Volcanic regions such as Iceland are not active most of the time, but can suddenly be activated by movements in the earth tens of kilometres below that fill chambers above with magma. “They can become very dynamic, raised in pressure, and even force magma to the surface. But if it’s not activated, then there’s no reason to expect a violent eruption, even if you drill into it,” she said.

“Having said that, with only one experimental account to go on, it wouldn’t be a good idea to drill like this in a volcanic region anywhere near a city,” she added.

The team, she said, deserved credit for using the opportunity to do research. “Most people faced with tapping into a magma chamber would pack their bags and leave,” she said. “But when life gives you lemons, you make lemonade.”

Water and heat = power. nea.is

In Iceland, around 90% of homes are heated from geothermal sources. According to the International Geothermal Association, 10,700MW of geothermal electricity was generated worldwide in 2010. Typically, these enhanced or engineered geothermal systems are created by pumping cold water into hot, dry rocks at depths of between 4-5km. The heated water is pumped up again as hot water or steam from production wells. The trend in recent decades has been steady growth in geothermal power, with Iceland, the Philippines and El Salvador leading the way, producing between 25-30% of their power from geothermal sources. Considerable effort invested elsewhere, including Europe, Australia, the US, and Japan, has typically had uneven results, and the cost is high.

With the deeper boreholes, the IDDP are looking for a further prize: supercritical water; at high temperature and under high pressure deep underground, the water enters a supercritical state, when it is neither gas nor liquid. In this state it carries far more energy and, harnessed correctly, this can increase the power output above ground tenfold, from 5MW to 50MW.

Elders said: “While the experiment at Krafla suffered various setbacks that pushed personnel and equipment to their limits, the process itself was very instructive. As well as the published scientific articles we’ve prepared comprehensive reports on the practical lessons learned.” The Icelandic National Power Company will put these towards improving their next drilling operations.

The IDDP is a collaboration of three energy companies, HS Energy Ltd, National Power Company and Reykjavik Energy, and the National Energy Authority of Iceland, with a consortium of international scientists led by Elders. The next IDDP-2 borehole will be sunk in southwest Iceland at Reykjanes later this year.

For more science news, analysis and commentary, follow us on @ConversationUK. Or like us on Facebook.


AWS Tips, Tricks, and Techniques


Comments:"AWS Tips, Tricks, and Techniques"

URL:https://launchbylunch.com/posts/2014/Jan/29/aws-tips/


Background

AWS is one of the most popular cloud computing platforms. It provides everything from object storage (S3) and elastically provisioned servers (EC2) to databases as a service (RDS), payment processing (DevPay), virtualized networking (VPC and AWS Direct Connect), content delivery networks (CDN), monitoring (CloudWatch), queueing (SQS), and a whole lot more.

In this post I'll be going over some tips, tricks, and general advice for getting started with Amazon Web Services (AWS). The majority of these are lessons we've learned in deploying and running our cloud SaaS product, JackDB, which runs entirely on AWS.

Billing

AWS billing is invoiced at the end of the month and AWS services are generally provided on a "per use" basis. For example EC2 servers are quoted in $/hour. If you spin up a server for 6 hours then turn it off you'll only be billed for those 6 hours.

Unfortunately, AWS does not provide a way to cap your monthly expenses. If you accidentally spin up too many servers and forget to turn them off, then you could get a big shock at the end of the month. Similarly, AWS charges you for total outbound bandwidth used. If you have a spike in activity to a site hosted on AWS or just excessive usage of S3, you could end up with a sizable bill.

AWS does allow you to set up billing alerts. Amazon CloudWatch allows you to use your projected monthly bill as a metric for alerts. You can have a notification sent to you when it exceeds a preset dollar amount.

Do this immediately!

Seriously just go add this immediately. Even better, add a couple of these at a variety of dollar figures. A good starting set is: $1/mo, $10/mo, $50/mo, $100/mo, $250/mo, $500/mo, and $1,000/mo

If you ever end up with a runaway server or accidentally provision 10 servers instead of 1 (which is surprisingly easy when scripting automated deployments...), then you'll be happy you set up these billing alerts.
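If you prefer scripting over clicking through the console, a billing alarm can also be created with the AWS CLI. A minimal sketch (assuming billing metrics are enabled and an SNS topic for notifications already exists; the ARN and threshold are placeholders):

# Alarm when the month-to-date estimated charges exceed $100:
aws cloudwatch put-metric-alarm \
    --alarm-name "billing-over-100-usd" \
    --namespace "AWS/Billing" --metric-name "EstimatedCharges" \
    --dimensions Name=Currency,Value=USD \
    --statistic Maximum --period 21600 --evaluation-periods 1 \
    --threshold 100 --comparison-operator GreaterThanThreshold \
    --alarm-actions "arn:aws:sns:us-east-1:123456789012:billing-alerts"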

Our company, JackDB, is entirely hosted on AWS and our bills are fairly consistent month to month. Once that monthly bill stabilized we set up billing alerts, as described above, at the expected value as well as 1/3 and 2/3 of it. This means that on approximately the 10th and 20th of each month we get a notification that our bill is at 1/3 or 2/3 of our expected monthly spend.

If either of those alerts is triggered sooner than that, it'd be a sign that our monthly spend is on the rise.

Security

Security is a big deal, especially so in the cloud. Countless articles have been written about it and the advice here is just a couple of quick points. At a later date I'd like to do a more detailed write-up.

Multi-factor Authentication

Multi-factor authentication, also known as two-factor authentication or 2FA, is an enhanced approach to authentication that requires combining multiple, separate factors of authentication. Most commonly these are something you know (ex: a password) and something you have (ex: a hardware token or virtual token generator on your phone).

If one of these gets compromised (ex: someone steals your password), they would still need one of the other factors to login. A single factor alone isn't enough.

AWS supports multi-factor authentication using standard TOTP pin codes. It supports both free software tokens (ex: Google Authenticator on your smartphone) and hardware tokens ($12.99 as of Jan 2014).

Do this immediately!

There is no reason not to have this enabled and I recommend immediately enabling it. In fact, you should enable 2FA on every service you use that supports it. If you're using Google Apps or even just regular Gmail, you should enable it there as well.

SSH

It's a good idea to use unique SSH keys for unrelated projects. It makes it much easier to deprovision them later on. You should generate a new pair of SSH keys (with a long passphrase) to use with your new servers. If you have multiple people sharing access to the same servers, each person should have their own unique SSH keys.

Rather than specifying your key files on the command line, you should add them to your ~/.ssh/config file so they'll be automatically used.

Host ec2-10-20-30-40.compute-1.amazonaws.com
    User ubuntu
    IdentityFile "~/.ssh/my-ec2-private-key"

By default SSH will send all your available SSH keys in your keyring or listed in your config file to a remote server. If you have a lot of SSH keys then you may get an error from a remote server when you try to SSH to it:

Too many authentication failures for username

From the remote server's perspective, each key offered counts as an authentication attempt. If you have too many SSH keys for it to try, then it may not get to the correct one. To force SSH to only send the key specific to the server you are connecting to, as listed in your config file, add the following to the top of your ~/.ssh/config:

Host *
    IdentitiesOnly yes

This will force you to explicitly list the SSH key to use for all remote servers. If you want to restrict this to just a subset of them, you can replace the "*" in the Host section with a wildcard matching the DNS name of the servers. For example *.example.com.

VPC

Amazon Virtual Private Cloud (VPC) is a networking feature of EC2 that allows you to define a private network for a group of servers. Using it greatly simplifies fencing off components of your infrastructure and minimizing the externally facing pieces.

The basic idea is to separate your infrastructure into two halves, a public half and a private half. The external endpoints for whatever you are creating goes in the public half. For a web application this would be your web server or load balancer.

Services that are only consumed internally such as databases or caching servers belong in the private half. Components in the private half are not directly accessible from the public internet.

This is a form of the principle of least privilege and it's a good idea to implement it. If your server infrastructure involves more than one server, then you probably should be using a VPC.

Bastion Host

To access internal components in the private half of your VPC you'll need a bastion host. This is a dedicated server that will act as an SSH proxy to connect to your other internal components. It sits in the public half of your VPC.

Using a bastion host with a VPC greatly simplifies network security when working on AWS by significantly minimizing the number of external firewall rules you need to manage. Here's how to set one up:

  • Spin up a new server in the public half of your VPC
  • Create a security group Bastion SG and assign it to the new server
  • Edit the security groups for your private half servers to allow inbound access on port 22 (SSH) from Bastion SG
  • Add your whitelisted IPs to the inbound ACL for Bastion SG (see the next section)

An SSH proxy server doesn't use that much CPU and practically zero disk. A t1.micro instance (the cheapest one that AWS offers) is more than enough for this. As of Jan 2014, at on-demand rates this comes out to ~$15/mo. With reserved instances you can bring this down to about $7/mo.

The added cost is nothing compared to the simplicity and security it adds to your overall environment.
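Once the bastion is up, you reach the private-half servers by hopping through it rather than exposing their SSH ports. A minimal sketch (the hostname and internal IP are placeholders, not from the original post):

# One-off connection to an internal host, proxied through the bastion:
ssh -o ProxyCommand="ssh -W %h:%p ubuntu@bastion.example.com" ubuntu@10.0.1.5

The same ProxyCommand line can live in your ~/.ssh/config so internal hosts are reachable with a plain ssh command.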

Firewall Whitelists

Any server with port 22 (SSH) open to the public internet will get a lot of hacking attempts. Within 24 hours of turning on such a server you should see a lot of entries in your SSH logs of bots trying to brute-force a login. If you only allow SSH key based authentication this will be a pointless exercise but it's still annoying to see all the entries in the log (ie. it's extra noise).

One way to avoid this is to whitelist the IP addresses that can connect to your server. If you have a static IP address at your office or if it's "mostly static" (ex: most dynamic IPs for cable modems and DSL don't change very often), then you can set up the firewall rules for your servers to only allow inbound SSH access from those IPs. Other IP addresses will not even be able to tell there is an SSH server running. Port scanning for an SSH server will fail as the initial TCP socket will never get established.

Normally this would be a pain to manage on multiple servers but by using a bastion host this only needs to be done in one place. Later on if your IP address changes or you need to connect to your server from a new location (ex: on the road at a hotel), then just add your current IP address to the whitelist. When you're done, simply remove it from the list.
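With the AWS CLI, adding and later removing your current IP on the bastion's security group looks roughly like this (a sketch; the group ID and address are placeholders):

# Allow inbound SSH from your current public IP only:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 203.0.113.10/32
# Remove the rule again when you no longer need it:
aws ec2 revoke-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 203.0.113.10/32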

Email

Amazon Simple Email Service (SES) is Amazon's send-only email service for AWS. You can use it to send out email from your application or command line scripts. It includes both an AWS-specific programmatic API and a standard SMTP interface. Both cost the same (it's pay per use) but for portability purposes I recommend using the SMTP interface. This allows you to change your email provider down the road.

If you're only sending a handful of emails then SES can be used free of charge. The first 2,000 emails per day are free when you send them from an EC2 server. For a lot of people this should be more than enough. After the first 2,000 per day it's $.10 per 1,000 emails plus outbound bandwidth costs.
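Since SES exposes a standard SMTP endpoint, any SMTP client can talk to it. For example, curl can send a test message (a sketch; the endpoint region, SMTP credentials, and addresses are placeholders, and message.txt holds the raw headers and body):

curl --ssl-reqd --url "smtp://email-smtp.us-east-1.amazonaws.com:587" \
    --user "SMTP_USERNAME:SMTP_PASSWORD" \
    --mail-from "hello@example.com" --mail-rcpt "you@example.com" \
    --upload-file message.txt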

Verification

When you first set it up you'll need to verify ownership of the email addresses that you'll be sending from. For example if you want to send email from hello@example.com then you'll need to prove you actually control example.com. This is done by verifying that you can receive email at that address. Alternatively, you can verify an entire domain by setting up TXT records to prove domain ownership.

You cannot send email until this verification is complete and it can take a little while for it to propagate through the system. Additionally, to prevent spammers from using SES for nefarious purposes, Amazon restricts your initial usage of SES. Your sending limit is gradually increased. If you plan on sending email from your application you should set up SES immediately so that it's available when you need it.

DKIM and SPF

To ensure that your emails are actually received by your recipients and not rejected by their spam filters you should set up both DKIM and SPF for your domain. Each is a way for recipients to verify that Amazon SES is a legitimate sender of email for your domain.

You can read more about setting up SPF on SES here.

You can read more about setting up DKIM on SES here.

Also, if you haven't already, you should set up DKIM and SPF for your domain's email servers as well. If you're using Google Apps for email hosting more details are available here.
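Once the DNS records are published you can sanity-check them from the command line (a sketch; example.com and the selector are placeholders, and the exact record values come from the SES console):

# SPF: the TXT record should list Amazon SES as a permitted sender
dig +short TXT example.com
# DKIM: SES's Easy DKIM publishes CNAME records under _domainkey
dig +short CNAME selector._domainkey.example.com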

Testing via Port 25

Once you have it set up, a simple way to test out both DKIM and SPF is using the email verification service provided by Port 25. Simply send them an email and a short while later they'll respond back with a report saying whether SPF and DKIM are properly configured. They'll also indicate whether their spam filters would flag your message as junk mail.

Note that you can use Port 25 to verify your personal email address as well, not just Amazon SES. Just manually send an email to Port 25 and wait for the response.

EC2 (on the cheap)

Amazon EC2 allows you to spin up servers on demand and you only pay for what you use, billed hourly. This means it's particularly catered towards usage patterns that involve scaling up to a large number of servers for a short period of time.

The flip side of the fine-grained billing of EC2 is that the on-demand price of the servers is more expensive than other cloud providers. Here are a couple of tips to lower your costs.

Reserved Instances

If you are running a server that is always online you should look into reserved instances. With reserved instances you pay an upfront fee in exchange for greatly reduced hourly rates. Amazon offers three levels of reserved instances, Light (the cheapest), Medium, and Heavy (most expensive), and each is offered in 1-year (cheaper) or 3-year blocks (more expensive).

Each costs progressively more, but also progressively reduces the hourly cost for running a server. For example a standard m1.small instance costs $.06/hour on-demand. If you buy a 3-year light reserved instance for $96 the hourly price drops to $.027/hour. Factoring in the up front cost and assuming the server runs 24-hours a day, it would take about 4 months to break even with the reserved instance vs paying on-demand rates.
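That break-even point is easy to sanity-check with a little shell arithmetic (a sketch using the numbers above, not from the original post):

# $96 upfront, $0.06/hr on-demand vs $0.027/hr reserved:
echo "96 0.06 0.027" | awk '{ h = $1 / ($2 - $3); printf "break even after %.0f hours (~%.1f months)\n", h, h / 720 }'
# break even after 2909 hours (~4.0 months)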

Billing for Heavy instances is a bit different from Light or Medium, as you'll be billed for the underlying instance regardless of whether it's actually running. This means that if you buy a 3-year heavy reserved instance you're agreeing to pay for running that instance 24-hours a day for the next three years.

The savings for reserved instances is anywhere from 25% to 65% over on-demand pricing. The break even point is anywhere from 3-months to a year. Once your AWS usage has stabilized it's well worth the time to investigate the cost savings of buying reserved instances.

If you're unsure of whether you'll be running the instance types a year or two from now I suggest sticking to Light or Medium instances. The savings differential is only another 10-15% but they allow you to change your infrastructure more cheaply down the road.

Reserved Instance Marketplace

There's also a reserved instance marketplace where you can buy reserved instances from third parties or sell those that you no longer need. Since reserved instances only impact billing there's no difference in functionality in buying them via the third party marketplace. In fact, it's all part of the same UI in the admin console.

The only real difference with third party reserved instances is that the duration of the reservation can be just about anything. This can be very useful if you anticipate a usage term besides 1-year or 3-years. Just make sure to check the actual price, as there are usually a couple listed with wildly inflated prices compared to the Amazon-offered ones.

Spot Instances

Spot instances allow you to bid on excess EC2 capacity. As long as your bid price is above the current spot price, your instance will continue to run and you'll pay the lower of the two per hour. If the spot price increases beyond your bid, your instance may be terminated. This termination could happen at anytime.

As they can be terminated at any time, spot instances work best when application state and results are stored outside the instance itself. Idempotent operations and reproducible or parallelizable work is usually a great candidate for spot instances. One of the most popular use cases for it is for running continuous integration (CI) servers. If the CI server crashes and restarts an hour later, it's not that big of a deal. The only result we really care about is if the final application build/test was successful.

More complicated setups are possible with spot instances as well. If you fully automate their provisioning (ie. automate security groups, user data, startup scripts), you can even have them automatically register themselves via Route53 DNS and then join an existing load balancer. In a later article I'll be writing about how to build such a setup that cheaply scales out a stateless web service via a phalanx of spot instances, all the while being fault tolerant to their random termination.
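Requesting a spot instance from the CLI looks roughly like this (a sketch; the bid price, AMI ID, and key name are placeholders, not from the original post):

# Bid $0.05/hour for a single one-time m1.small spot instance:
aws ec2 request-spot-instances --spot-price "0.05" --instance-count 1 \
    --type "one-time" \
    --launch-specification '{"ImageId":"ami-12345678","InstanceType":"m1.small","KeyName":"my-key"}'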

S3

S3 is Amazon's object store. It allows you to store arbitrary objects (up to 5TB in size) and access them over HTTP or HTTPS. It is said to be designed to provide 99.999999999% durability and 99.99% availability. For what it offers, S3 is really cheap. Amazon also periodically drops the prices for it as well. Most recently a week ago.

The S3 API allows you to create signed URLs that provide fine-grained access to S3 resources with custom expirations. For example, an object stored in S3 that is not publicly readable can be made temporarily accessible via a signed URL. This is a great way to provide users access to objects stored in private S3 buckets from webapps.
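For instance, the AWS CLI can mint such a signed URL for a private object (a sketch; the bucket and key are placeholders):

# Generate a link to a private object that expires in 15 minutes:
aws s3 presign s3://my-s3-bucket/path/to/object --expires-in 900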

GPG

If you're using S3 to store sensitive data (ex: database backups) then you should encrypt the data before you upload it to S3. That way only encrypted data is stored on Amazon's servers.

The easiest way to do this is using GPG. Generate a GPG key pair for your backup server on your local machine and add the public key of the key pair to the server. Then use that public key to encrypt any data before uploading to S3.

# Set the bucket/path on S3 where we will put the backup:
$ S3_PATH="s3://my-s3-bucket/path/to/backup/backup-$(date +%Y%m%d)"
# Temp file for encryption:
$ GPG_TEMP_FILE=$(mktemp)
# Encrypt it with GPG:
$ gpg --recipient aws.backup@example.com --output "$GPG_TEMP_FILE" --encrypt mydata.foo
# Upload it via s3cmd:
$ s3cmd put "$GPG_TEMP_FILE" "$S3_PATH"
# Clean up temp file:
$ rm "$GPG_TEMP_FILE"

The only downside to encrypting data prior to storage on S3 is that you will need to decrypt it to read it. You can't provide the S3 URL to someone else to download the data (ex: a direct link to a user's content that you're storing on their behalf) as they will not be able to decrypt it. For backups this is the right approach as only you (or your company, your team, ...) should be able to read your data.
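Restoring one of these backups is just the upload script in reverse (a sketch, assuming you hold the private half of the key pair; the S3 path is a placeholder):

# Download the encrypted backup and decrypt it locally:
$ s3cmd get "s3://my-s3-bucket/path/to/backup/backup-20140129" backup.gpg
$ gpg --output mydata.foo --decrypt backup.gpg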

The S3 CLI tool s3cmd includes an option to specify a GPG key. If set, then it will automatically encrypt all objects you PUT to S3. It can be a convenient option as you only need to set it in one place (the s3cmd config file). I prefer explicitly adding the GPG steps to my backup scripts though.

Encryption At Rest

S3 also supports what is referred to as Server Side Encryption. With this option enabled, Amazon will encrypt your data before persisting it to disk, and will transparently decrypt it prior to serving it back to a valid S3 request. The documentation for it describes how they keep the decryption keys separate from the S3 API servers and request them on demand.

Since Amazon can still read your data I don't consider this to be that useful of a feature. If you want to ensure that nobody else can read your data then you need to do the encryption. If you trust a third party to do it, then by definition that third party is able to read your unencrypted data.

Still though, it doesn't hurt to enable it either. Apparently there is no real performance penalty for enabling it.

Object Expiration

Once you start using S3 for backups you'll notice that your S3 bill will grow fairly linearly (or faster!). By default, S3 persists objects forever so you'll need to take some extra steps to clean things up.

The obvious solution is to delete objects as they get old. If you're using S3 for automated backups then you can have your backup script delete older entries. With additional logic you can have your backup scripts keep various ages of backups (ex: monthly for a year, weekly for a month, daily for a week).

A simpler, albeit coarser, approach is to use S3 object expirations. It allows you to define a maximum age for S3 objects. Once that age is reached, the object will automatically be deleted with no user interaction. You could set a 6-month expiration on your S3 backup bucket and it will automatically delete entries older than that.
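Expiration rules can be set in the S3 console or scripted. With a recent AWS CLI, a bucket-wide rule might look like this (a sketch; the bucket name, prefix, and age are placeholders, not from the original post):

# Expire everything under backups/ after 180 days:
aws s3api put-bucket-lifecycle-configuration --bucket my-s3-bucket \
    --lifecycle-configuration '{"Rules":[{"ID":"expire-old-backups","Filter":{"Prefix":"backups/"},"Status":"Enabled","Expiration":{"Days":180}}]}'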

Warning:If you use S3 object expiration make sure that your backups are actually working. It will delete your old objects regardless of whether your latest backups are actually valid. Make sure to test your backups regularly!

Object expiration is also a simple way to delete all the objects in an S3 bucket. From the AWS S3 console simply set the expiration for the entire bucket to be 1-day. The objects will then be automatically deleted after a day. S3 expiration is asynchronous so you may still see the objects in S3 for a short while after 24-hours but you will not be billed for them.

Glacier

Glacier is Amazon's long term, persistent storage service. It's about an order of magnitude cheaper than S3 but with much slower access times (on the order of multiple hours).

Rather than deleting old S3 objects, you can configure S3 lifecycle rules to automatically transition them to Glacier. The storage cost will be about 10x less and if you really need the data (ex: you accidentally destroyed all your S3 backups) you can slowly restore it from Glacier.

Final Thoughts

This post ended up longer than originally planned, but it's still a very incomplete list. There are plenty of other tips, tricks, and techniques to use with AWS, this is just a (hopefully helpful) start.

Do you have something to add to this list or a better way of solving some of these problems? Tell me about it.

#SOTU2014: See the State of The Union address minute by minute on Twitter

SymPy Gamma

Mobile Customer Communications: In-App Live Chat | In-App Helpdesk | In-App Messaging

Product Hunt

1/9998 - Wolfram|Alpha

The World's Worst Penetration Test Report by #ScumbagPenTester


Comments:" The World's Worst Penetration Test Report by #ScumbagPenTester "

URL:http://it.toolbox.com/blogs/securitymonkey/the-worlds-worst-penetration-test-report-by-scumbagpentester-58747


Many of you may have seen my recent Twitter rant about a penetration test report that I was reviewing for a peer of mine.  I shared some of the tidbits that made me most angry - angry to the point of wanting to flip over my desk and walk away from the collection of nonsensical words glaring from my screen.

 

I promised @gattaca a full write up of this travesty.  I am a monkey of my word.  Here it is, in all its glory.

 

The day started off as any normal day in my world of information security madness.  Several large vessels of Starbucks coffee consumed, sexy security problems squashed under my finger, and the usual assortment of interesting questions posed to me by my wonderful blog readers (chiefmonkey AT gmail.com). 

 

And then I got THAT e-mail.

 

It was a cry for help from a peer of mine who is the CISO at a pretty large firm in the northeastern part of the United States.  I've known this guy for years.  Sharp dude.  Also likes Starbucks.  We get along.

 

Snippets from his e-mail:

... I finally received my funding for this year's penetration test, and was pretty excited to get things going as usual.  Our compliance monkey has been on my tail about using a different testing firm every other year, so I asked around and got the name of this place "Scumbag Pentesting, Inc." [ed: I'm protecting these folks - NDAs and all].  Their chief pen testing monkey couldn't get into the USA for whatever reason, so he managed the test from India.  The on-site guys were local and pretty efficient.  The testing was typical, but I noticed that the chief pentesting guy was never on the briefing calls and let the local project manager do all the talking.  Weird, but nothing I wasn't willing to look over.   They took about a week to collect all the data and ship it off to India where the chief guy wrote up the report and then sent it to me via secure e-mail.  I'm not going to spoil it for you Chief.  Since we have a mutual NDA I've attached the report in a secure link below.  Call me.  ... CISOguy  

"Not going to spoil it for you." he says.  Oh boy.

 

I'm naturally paranoid, so I spun up a VM and opened the report (I have to admit I wasn't sure if the surprise was a PDF virus or something else - yes my friends like to prank me from time to time).

 

What I saw before me made me... angry.  Angry because I can't believe that someone that calls themself a "chief pentester" would write something like this.  I'm angry because the industry hasn't weeded these people out yet.  I'm angry because there are some CISOs out there that might be silly enough to make risk-based decisions based on a report that looked like this. 

 

Firms hire reputable penetration testing firms to perform testing against their assets because it's supposed to be an important tool to help them understand major risks and vulnerabilities that may be present - it should help them understand the risks, measure and prioritize remediation efforts, and help firms craft a comprehensive plan to reduce risk going forward.  It's a point-in-time snapshot - it's an educational opportunity.  But not this time.

 

Nobody wins here - except the tester who collected the check.

 

This report had three components: an executive summary, a list of findings, and three pages detailing how Scumbag Pentesting can swoop in and fix all the issues and make them go away.  I threw up right about here.

 

The executive summary was a mish-mash of copy-paste from several other penetration test reports.  You can easily tell this by several "obvious" key things:

  • They mention another customer's name by accident.
  • The voice of the writing switches back and forth between first and third person.
  • The summary mentions findings that aren't in the findings listing.
  • The paragraphs of the executive summary are in different fonts.

 

Have you ever been to a foreign country and eaten at a restaurant where the menu is pages and pages of items written in a language that you can't read or understand?  Then you've seen the findings section of this report.  This section contained goodies like:

 

While no issues were found on this portion of the network, issues may or may not exist until they are found.

 

Yes folks, they have discovered Schrödinger's Vulnerability.

 

There could be ramifications of the test on this database server that have serious ramifications.

 

Serious ramifications are serious, man.

 

Enumeration of network was not done due to no remaining hours but all target hosts were targeted.

 

To quote @krangarajan: "That sounds like consultant speak for 'I did F**k all'"

 

It is recommended that customer remediate all findings based on prioritization in this report.

 

All of the findings in this report are marked HIGH.  Apparently running FTP on port 21 rates a HIGH with a recommendation of "run FTP on different port like 2121".

 

Scope of test was to conduct using white box methodology.  Testers conducted DMZ test using black box.  

That makes sense.  The customer wants you to do an authenticated test.  Screw that.  METASPLOIT it.

 

All password hashes for application APPLICATION were discovered and reversed revealing weak hash cryptography.  

This would almost be a good finding, except that the field tester's notes in the appendix clearly state that application APPLICATION didn't use any hashing and was storing passwords in clear text.  This guy was so good he hashed them, cracked them, and then unhashed them?

 

Application APPLICATION2 susceptible to XSS attack in login control page.  Code responsible located as "..." and was fixed to reduce production exposure of customer.

 

Hey, who needs a software development lifecycle?  He finds, he fixes - I wonder if he checked it into TFS too?

 

Use of NMAP enumeration tool crashed SERVER5 cluster.  After repeated times of customer putting SERVER5 cluster back online tool crashed again so further enumeration done with Nessus.  

I kicked my trashcan across the room.

 

MySQL configured to allow connections from 127.0.0.1.  Recommend configuration change to not allow remote connections.  

I used to put stuff like this in pen tests to see if my boss was paying attention.

 

Fixing the configuration will no longer allow evil connections by evil connection for configuration of server.  

Evil connections are connecting to evil connections to configure my server.  Are they evil connections with laser beams on their head?

 

Microsoft IIS susceptible to CVE-XXXXXXXX.  Recommend applying accordingly patch.  

Another almost good finding - but according to the appendix, this host is a RHEL 5.x box.  Those sysadmins - finding ways to run IIS on linux!! Brilliant!

 

The three pages at the bottom of the report contained a line-by-line listing of all the findings mentioned in the second part of the report, with prices marked to fix each finding.  Yes folks, it's a menu-o-remediation.  This is either brilliant or so ridiculous that it makes my eyes cross.  It contained wonderful little gems like this:

 

Patch IIS vulnerability CVE-XXXXXX        $500 (1 server)
Configure PHP.INI with secure settings    $390 (1 server)

 

There's more here, but what I have presented to you is an adequate summary of this nightmare.  I called my CISO friend, and after he was done laughing at my astonishment, he told me that he had no intention of paying them a single dollar and was handing the matter over to their legal/contracts area.  Phew.

 

I hope this post is a call to arms to both potential penetration test customers and information security professionals alike:

 

  • Do substantial research on any firm that you hire or recommend.
  • Always get a sample of work that is properly obfuscated that clearly highlights methodology and practices.
  • Insist on penetration test team leadership.  If you're the customer and the penetration test team's champion at your organization, demand accountability and an ethical approach to providing you with a sound and useful product.
  • For my fellow professionals out there - let's weed these flim-flam artists out of business.  While it's nearly impossible to share work like this without breaking the legal chains that bind us, I encourage back-channel communications on these matters.  Think of it as a reputation-based confidence control for quality.
  • Continue to be innovative, and raise the bar on product quality.  As your team gets better, it pushes competitive teams to get better as well. 
  • Use time-tried methodologies for your tests - don't make it up as you go along.  If you've never heard of OWASP, you're probably in the wrong business.

 

Cheers, and let's all fight to make sure that the ramifications of ramifications that may or may not exist don't affect your configuration files.  (It hurts just to read that, doesn't it?)

 

Chief

 

PS - If you have a horror story, please share!  You can use the email address above or post a comment below.

 


Strapfork | Easy Custom Bootstrap

DataHand - Wikipedia, the free encyclopedia


Comments:"DataHand - Wikipedia, the free encyclopedia"

URL:http://en.wikipedia.org/wiki/DataHand


The DataHand keyboard was introduced in 1995 by DataHand Systems, Inc. It was invented by Dale J. Retter and was produced by Industrial Innovations as early as 1992. The keyboard consists of two completely separate "keyboards", one for the left hand and one for the right, that are molded to rest your hands on. This allows you to place each hand wherever it is most comfortable to you. Each finger activates five buttons: the four compass directions as well as down. The thumbs likewise have five buttons: one inside and two outside, as well as up and down. The button modules in which the fingers rest are adjustable to best fit the user's hands - each side can be independently moved up and down, towards the palm or further away.

This ergonomic layout allows for all typing to occur without any wrist motion, as well as without any finger extension. The keyboard layout is initially similar to a QWERTY keyboard, but the middle two columns of keys (i.e. H,Y,G...) have been delegated to sideways finger movements, and all of the keys outside of the main three rows are accessed through two additional modes, including a mode for mousing. There are three primary modes altogether: letters, numbers and symbols, and function / mouse mode. Some practice is required, but eventual typing speedups are possible.

Also of note is the button design - instead of being spring-loaded, the buttons are held in place with magnets and are activated using optical sensors. This was done in order to dramatically reduce the finger workload while optimizing tactile feedback.

This unconventional keyboard was seen in the 1997 Jodie Foster movie Contact as the pilot's controls for the futuristic spaceship, and in the spy movie Stormbreaker. The Industrial Innovations version was featured on the television series Mighty Morphin Power Rangers.

After the initial prototype was released in 1995, DataHand has released the Professional and Professional II with new bodies. The Professional II also has extended programming capabilities over the Professional, being able to record macros of keystrokes for convenient use.

DataHand Systems, Inc. announced in early 2008 that it was ceasing to market and sell its keyboards. The company web site states that due to supplier issues, the company will not sell the DataHand keyboard "until a new manufacturer can be identified." However, the company plans a final, limited production run to satisfy existing customers. In January 2009, the company's website started taking orders for a "limited number of new DataHand Pro II units".

Around January 2014, the datahand.com site went offline.


The 2 Teenagers Who Run the Wildly Popular Twitter Feed @HistoryInPics - Alexis C. Madrigal - The Atlantic

Comments:"The 2 Teenagers Who Run the Wildly Popular Twitter Feed @HistoryInPics - Alexis C. Madrigal - The Atlantic"

URL:http://www.theatlantic.com/technology/archive/2014/01/the-2-teenagers-who-run-the-wildly-popular-twitter-feed-historyinpics/283291/?utm_content=buffer32d49&utm_source=twitter.com


Meet Xavier Di Petta and Kyle Cameron, ages 17 and 19, whose ability to build a massive audience from nothing may be unparalleled in media today.

There is a new ubiquitous media brand on Twitter.

No, I'm not talking about Pierre Omidyar's First Look Media or BuzzFeed or The Verge, or any other investor-backed startup. 

I'm talking about @HistoryInPics, which, as I discovered, is run by two teenagers: Xavier Di Petta, 17, who lives in a small Australian town two hours north of Melbourne, and Kyle Cameron, 19, a student in Hawaii.

They met hustling on YouTube when they were 13 and 15, respectively, and they've been doing social media things together (off and on) since. They've built YouTube accounts, making money off advertising. They created Facebook pages such as "Long romantic walks to the fridge," which garnered more than 10 million Likes, and sold them off. More recently, Di Petta's company, Swift Fox Labs, has hired a dozen employees, and can bring in, according to an Australian news story, 50,000 Australian dollars a month (or roughly 43,800 USD at current exchange rates). 

But @HistoryInPics may be the duo's biggest creation. In the last three months, this account, which tweets photographs of the past with one-line descriptions, has added more than 500,000 followers, bringing its total to 890,000. (The account was only established in July of 2013.) If the trend line continues, it will hit a million followers next month.

The new account has gained this massive following without the official help of Twitter, which often sticks celebrity and media accounts on its recommended-follow list, inflating their numbers. 

As impressively, my analysis of 100 tweets from the account this week found that, on average, a @HistoryInPics tweet gets retweeted more than 1,600 times and favorited 1,800 times. 

For comparison, Vanity Fair's Twitter account—with 1.3 million followers—tends to get a dozen or two retweets and favorites on any given tweet. 

I've got about 140,000 followers and I've tweeted more than 30,000 times. I can't remember ever having a single tweet get retweeted or favorited as much as the average @HistoryInPics tweet.

Actual people seem to follow these accounts. A quick check on a tool that scans Twitter accounts for bot followers says that only five percent of @HistoryInPics' followers are bots. That's an incredibly low number for such a large follower base. (For comparison, the tool found that 34 percent of my followers were bots.)

Qualitatively, looking through who follows @HistoryInPics, I don't see the telltale signs of bots. Famous people follow the accounts, too. Jack Dorsey famous. Kim Kardashian famous. (From his personal Twitter account, @GirlsGoneKyle, Cameron recently posted a screenshot of a direct message that Kardashian sent to them.)

Even other media people—who, as we'll soon see, have big issues with the brand—just can't help themselves sometimes from sharing photos from the account, when the perfect image from the past crosses their feed at just the right moment. Who can resist Tupac Shakur on a stretcher, right after being shot, with a middle finger in the air? Or a World War I train taking soldiers through Flanders? Or Audrey Hepburn and Grace Kelly backstage at the Oscars?

In other words, @HistoryInPics is a genuine phenomenon built entirely on Twitter.

But strangely, in a world where every social media user seems to be trying to drive attention to some website or project or media brand, @HistoryInPics doesn't list its creators nor does it even have a link in its bio. There are no attempts at making money off the obvious popularity of the account. On initial inspection, the account looks created, perhaps, by anonymous lovers of history.

But no, @HistoryInPics is the creation of two teenagers whose closest physical connection is that they both live near the Pacific Ocean. 

It's not just @HistoryInPics, either. They're also behind @EarthPix, which has similarly staggering stats, and several comedy accounts that they're in the process of selling that I agreed not to disclose. They've got at least five accounts with hundreds of thousands of followers and engagement metrics that any media company would kill for right now. 

How do they do it? Once they had one account with some followers, they used it to promote other ones that could capitalize on trends they saw in social-media sharing. "We normally identify trends (or create them haha). We then turn them into a Twitter account," Di Petta said in an IM conversation. "Share them on established pages, and after 50,000 - 100,000 followers they've gained enough momentum to become 'viral' without further promotion."

But I've seen many, many people try to do similar things, and very few people have had this level of success. They're like great DJs playing exactly the songs people want, but for photographs on Twitter.

To put it bluntly: Their work in building audiences from nothing might be unparalleled in media today. 

No less a curator than Xeni Jardin, a co-founder of BoingBoing, recently tweeted, "I love @HistoryInPics’ taste in material, but" she continued, "would it kill them to include credit for the great photographers who shot these iconic images?"

Which brings us to the problems.

The audiences that Di Petta and Cameron have built are created with the work of photographers who they don't pay or even credit. They don't provide sources for the photographs or the captions that accompany them. Sometimes they get stuff wrong and/or post copyrighted photographs. 

They are playing by rules that "old media" and most new media do not. To one way of thinking, they are cheating at the media game, and that's why they're winning. (Which they are.)

I interviewed Di Petta on Skype and got him to walk me through the details of building this little empire of Twitter accounts. As he openly talked through how he and Cameron had built the accounts, I asked him how he felt about criticism that they didn't source or pay for images. 

"The majority of the images are public domain haha," he responded.

So I said, great, let's look through the last five together. And not all of them were in the public domain. So, I said, "How do you think about the use of these images?"

"Photographers are welcome to file a complaint with Twitter, as long as they provide proof. Twitter contacts me and I'd be happy to remove it," he said. "I'm sure the majority of photographers would be glad to have their work seen by the massives."

I pressed him on this point. Shouldn't the onus be on him and Cameron to get those rights from the photographers they assume would be grateful?

"It would not be practical," he said. "The majority of the photographers are deceased. Or hard to find who took the images."

Then he said, "Look at Buzzfeed. Their business model is more or less using copyright images."

I said most people in the media don't appreciate Buzzfeed's interpretation of the fair use exemption from copyright law. "The photographers I know would want me to ask you if you see anything wrong with profiting from their work?" I asked him.

"That's an interesting point," Di Petta responded. "I feel like we're monetizing our traffic, but they would see it as we're monetizing their images."

"They would say, 'Without our images, you have no traffic,'" I said.

"They do have a point," he conceded. "But whether we use images X or Y, there will be traffic to the site. But I can see their point of view."

In this logic, Di Petta echoes the logic of all social media networks. 

Facebook, Twitter, and (especially) Pinterest all benefit from people sharing copyrighted images. Visual content—none of which the companies create themselves—drives almost all social media sites. And they pay for none of it. 

It might be easy to get mad at Di Petta and Cameron over how they're using Twitter to build such huge, dedicated audiences, but the people who are really profiting from the profusion of copyrighted image sharing are the big social networks, two of which are now publicly traded companies, and another with a private multi-billion dollar valuation. Who really deserves the scorn—the two best players in the game or the people who own the stadium?

When @EarthPix and @HistoryInPics hit a million followers on Twitter, Di Petta and Cameron are likely to launch a website, which would be the beginning of monetizing their traffic. 

Groundbreaking discovery could pave way for routine use of stem cells in medicine - The Times of India

Comments:"Groundbreaking discovery could pave way for routine use of stem cells in medicine - The Times of India"

URL:http://timesofindia.indiatimes.com/home/science/Groundbreaking-discovery-could-pave-way-for-routine-use-of-stem-cells-in-medicine/articleshow/29559517.cms


Scientists have created embryonic-like stem cells by simply bathing ordinary skin or blood cells in a weak acid solution for half an hour in an astonishing breakthrough that could allow doctors in the future to repair diseased tissue with a patient's own cells.

Researchers at the Riken Centre for Developmental Biology in Japan have announced the breakthrough in the journal Nature and it has been welcomed in Britain as an important step towards using stem cells routinely in medicine without the ethical or practical problems of creating human embryos or genetically modified cells.

Although the research was carried out on laboratory mice, scientists believe that the same approach should also work on human cells. It radically changes the way "pluripotent" stem cells - which can develop into any of the specialised tissues of the body - can be created from a patient's own cells as part of a "self-repair" kit.

"Once again Japanese scientists have unexpectedly rewritten the rules on making pluripotent cells from adult cells....that requires only transient exposure of adult cells to an acidic solution. How much easier can it possibly get?" said Professor Chris Mason, chair of regenerative medicine at University College London.

Two studies in Nature have shown that there is now a third way of producing pluripotent stem cells, other than creating embryos or inducing the changes by introducing new genes into a cell. The third way is by far the simplest of the three approaches, scientists said.

The scientists believe that the acidity of the solution created a "shock" that caused the blood cells of adult mice to revert to their original, embryonic-like state. From this pluripotent state, the newly created stem cells were cultured in specially prepared solutions of growth factors to develop into fully mature cells, including an entire foetus.

Professor Robin Lovell-Badge of the Medical Research Council's National Institute for Medical Research, said: "It is going to be a while before the nature of these cells are understood, and whether they might prove to be useful for developing therapies, but the really intriguing thing to discover will be the mechanism underlying how a low pH shock triggers reprogramming. And why it does not happen when we eat lemon or vinegar or drink cola?"

Edward Snowden nominated for Nobel peace prize | World news | theguardian.com

Comments:" Edward Snowden nominated for Nobel peace prize | World news | theguardian.com "

URL:http://www.theguardian.com/world/2014/jan/29/edward-snowden-nominated-nobel-peace-prize


Edward Snowden will be one of scores of names being considered by the Nobel prize committee. Photograph: The Guardian/AFP/Getty Images

Two Norwegian politicians say they have jointly nominated the former National Security Agency contractor Edward Snowden for the 2014 Nobel peace prize.

The Socialist Left party politicians Baard Vegar Solhjell, a former environment minister, and Snorre Valen said the public debate and policy changes in the wake of Snowden's whistleblowing had "contributed to a more stable and peaceful world order".

Being nominated means Snowden will be one of scores of names that the Nobel committee will consider for the prestigious award.

The five-member panel will not confirm who has been nominated but those who submit nominations sometimes make them public.

Nominators, including members of national parliaments and governments, university professors and previous laureates, must enter their submissions by 1 February.

The prize committee members can add their own candidates at their first meeting after that deadline.
