
Software Detection of Currency :: Projects :: Steven J. Murdoch


Comments:"Software Detection of Currency :: Projects :: Steven J. Murdoch"

URL:http://www.cl.cam.ac.uk/~sjm217/projects/currency/


Recent printers, scanners and image-manipulation software identify images of currency, refuse to process the image, and display an error message linking to www.rulesforuse.org. The detection algorithm is not disclosed; however, it is possible to test whether sample images are identified as currency. This webpage presents an initial analysis of the algorithm's properties, based on results from the automated generation and testing of images.

Currently only a few raw results are presented, but more details will be added at a later stage. You may also be interested in my IH04 talk on this subject given at the Information Hiding Workshop 2004 rump session. There are more details on the detection mechanism in my 21C3 talk at the 21st Chaos Communication Congress.

Eurion Constellation

Initially it was thought that the "Eurion constellation" was used to identify banknotes in the newly deployed software-based system, since this has been confirmed to be the technique used by colour photocopiers, and it was both necessary and sufficient to prevent an item from being duplicated on the photocopier tested. However, further investigation showed that the detection performed by software is different from the system used in colour photocopiers: the Eurion constellation is neither necessary nor sufficient, and it probably is not even a factor.

UK Pounds

This image, from the 10 Pound note, contains all the Eurion instances that are known of, but is not detected as currency:

However, this image, which has one extra column of pixels at the right, is detected as currency:

Even if the constellations are blanked out:

US Dollars

Similar results can also be seen with the new US $20, in which the "0"s of the yellow "20"s scattered in the background form Eurion constellations.

Both Paint Shop Pro (PSP) and Photoshop identify this image (from an Adobe Forum discussion) as being currency and refuse to open it, despite there being no instance of the Eurion constellation:

Another interesting property of this image is that it seems to be a near-minimal test case: if any changes are made, it is no longer detected as currency. Even cropping away the black border will result in it opening in PSP and Photoshop CS without any warning:

It is believed that some printers use histograms to detect currency. If this is the feature used by PSP/Photoshop, then removing the black border could change the histogram enough to circumvent the detection system. However, this image, which is a tiled version of the first image, is not detected as currency, despite having the same histogram:
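To see why the tiling counter-example is informative, consider a rough sketch (not part of the original analysis; the filename is a placeholder): a normalised colour histogram is unchanged by tiling, so a detector that looked only at the histogram could not distinguish the two images, yet the software does.

# Minimal sketch: a normalised colour histogram is invariant under tiling,
# so a histogram-only detector cannot tell an image from a tiled copy of it.
# Requires numpy and Pillow; "note_detail.png" is a hypothetical input file.
import numpy as np
from PIL import Image

def normalised_histogram(img, bins=64):
    """Per-channel histogram, normalised so it does not depend on image size."""
    arr = np.asarray(img.convert("RGB"))
    hists = [np.histogram(arr[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

original = Image.open("note_detail.png")
tiled = Image.new("RGB", (original.width * 2, original.height * 2))
for dx in (0, original.width):
    for dy in (0, original.height):
        tiled.paste(original, (dx, dy))

h1, h2 = normalised_histogram(original), normalised_histogram(tiled)
print("max histogram difference:", np.abs(h1 - h2).max())   # ~0, up to rounding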

Detection Code

The detection code of both PSP and Photoshop CS appears to be the same, since they share the same edge cases. A static disassembly reveals the string "DMRC" in symbols, the stock ticker symbol of Digimarc, a company specialising in watermarking. Subsequent news articles have confirmed that Digimarc developed the currency-detection code on behalf of the Central Bank Counterfeit Deterrence Group (CBCDG), part of the G10 organisation. According to Adobe and Jasc, adding the detection code is "optional" and they do not get access to the source code or details of the algorithm.

It seems that the algorithm is optimised to err on the side of false negatives, and I have not been able to generate a false positive. It has been reported that generating a false negative is easy, perhaps as simple as using images of two notes side by side rather than just one. A statement from Adobe said that the version included in Photoshop CS was not the first one proposed by the CBCDG; previous revisions were rejected on the basis of having an unacceptable false-positive rate. Also, while disassembly is difficult, it is likely to be comparatively easy to patch around the call to the detection code and bypass the detector.

Another observation is that unmodified banknote images cause the error message to be displayed after a short time, while non-banknote images open quickly. However, the edge cases made from modified banknote images take a few extra seconds before either opening or displaying the error message.

This would suggest that there is a series of tests, each of which contributes a score indicating how much the image looks like a banknote, with the later tests taking longer to execute and presumably being more accurate. At each stage there could be an upper and a lower limit on the cumulative score: if the upper bound is reached, the image is not opened and an error message is displayed; if the lower bound is reached, the image is opened and no further tests are performed; otherwise the next test is executed.
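A minimal sketch of that conjectured structure, with made-up stage functions and thresholds (the real tests, weights and bounds are unknown):

# Conjectured cascade: each stage adds to a cumulative score, and per-stage
# upper/lower bounds allow an early "block" or early "open" decision.
# The stage functions and thresholds below are hypothetical placeholders.

def looks_like_currency(image, stages):
    """Return True to block the image as currency, False to open it normally.

    `stages` is a list of (test_fn, lower, upper) tuples, ordered from the
    cheapest test to the slowest, presumably most accurate one.
    """
    cumulative = 0.0
    for test_fn, lower, upper in stages:
        cumulative += test_fn(image)
        if cumulative >= upper:      # looks strongly like a banknote: block it
            return True
        if cumulative <= lower:      # clearly not a banknote: open immediately
            return False
    return False                     # assumption: no firm decision means open

# Placeholder scoring functions standing in for the unknown real tests.
def cheap_colour_score(image):
    return 0.5    # pretend: fraction of pixels matching banknote ink colours

def slow_texture_score(image):
    return 0.5    # pretend: strength of fine-line print texture

stages = [
    (cheap_colour_score, 0.1, 0.9),   # fast first pass
    (slow_texture_score, 0.6, 1.4),   # slower, more accurate second pass
]
print(looks_like_currency(object(), stages))   # -> False for these scores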

Strongly Detected Regions

The image below shows the smallest square starting at each point on an English £20 note, which is still detected as currency. The starting points are generated at 500 pixel intervals and the size of each crop is calculated to a precision of 5 pixels. Starting points are marked by a green dot, end points by a red dot; the colour of the square is of no significance. Only the crops smaller than 700x700 have been drawn. The resulting image has been converted from PPM to PNG and scaled down from 3524x1906 to 800x433 for presentation.

The images below are full size versions of three smallest crops.
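A sketch of how such a map can be generated, assuming a black-box is_detected() predicate that wraps the software under test; the 500-pixel grid and 5-pixel precision match the description above, and everything else is a placeholder.

# Sketch: for each grid point, find the smallest square crop (to 5 px
# precision) that the black-box detector still flags as currency.
# `image` is assumed to be a PIL Image; `is_detected` wraps the software
# under test and returns True when a crop is flagged as currency.

def smallest_detected_crop(image, x, y, is_detected, max_size=700, step=5):
    """Binary-search the side of the smallest detected square starting at (x, y)."""
    lo, hi = step, max_size
    best = None
    while lo <= hi:
        side = max(step, (lo + hi) // (2 * step) * step)   # keep sizes on a 5 px grid
        crop = image.crop((x, y, x + side, y + side))
        if is_detected(crop):
            best = side          # detected: try a smaller square
            hi = side - step
        else:
            lo = side + step     # not detected: need a larger square
    return best                  # None if even the largest square is not detected

def crop_map(image, is_detected, spacing=500):
    """Smallest detected crop size for a grid of starting points."""
    w, h = image.size
    return {(x, y): smallest_detected_crop(image, x, y, is_detected)
            for x in range(0, w, spacing) for y in range(0, h, spacing)}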

Benchmarking

The tests below show the smallest distortion required to prevent a section of a banknote (an English £20) from being detected as currency. For each test, the image on the left is the original and the image on the right shows the least distorted image which is no longer detected as currency. The attacks themselves were performed by Checkmark. The caption shows the Checkmark function applied, along with the resulting value of the parameter which was varied. Each image is unscaled but has been converted from PPM to PNG for presentation.

JPEG Compression

Median Filtering

Midpoint Filtering

Ratio Distortion (1 of 2)

Ratio Distortion (2 of 2)

Rotation

Rotation + Scaling

Resampling

Scaling (1 of 2)

Scaling (2 of 2)

Shearing

Trimmed Mean Filtering

Warping

Wavelet Compression

Wiener Noise Removal

Last modified 2009-10-12 16:23:14 +0100



Federating Do-It-Yourself ISPs from around the world | Fédération FDN


Comments:"Federating Do-It-Yourself ISPs from around the world | Fédération FDN"

URL:http://www.ffdn.org/en/article/2014-01-03/federating-do-it-yourself-isps-around-world


Last December, the FDN Federation attended the 30th Chaos Communication Congress in Hamburg.

It was a very fruitful event, which allowed us to meet many DIY ISP initiatives and community networks from all around the world. We discussed and exchanged ideas about our goals, our difficulties, and possible solutions. It was also an opportunity for every project to present their tools.

 

 

Y U NO ISP: taking back the net

Taziden gave a wonderful talk about the FFDN, our members, our motivations, and our core values: a neutral access to the Internet, together with a local infrastructure. The video is already available online.

While the talk was early in the morning, the room was completely packed! Drawing that much interest came as a delightful surprise. This shows that our message is relevant in these days of mass spying and unilateral control of the infrastructure.

A database of DIY ISPs from around the world

The congress was the opportunity to publicly launch our fantastic map of DIY ISPs: http://db.ffdn.org. It was mostly written by a member of CAFAI, part of the FFDN, and it's free software. Every ISP sharing our core values can add its information to the map, so that it's easier to know about all the wonderful DIY ISP initiatives. Maybe there is even one in your city!

Within a few hours of the tool being presented, we were delighted to see that several markers had been added to the map, including one in Africa! This is very rewarding, as one of our missions is to allow projects to know about each other and communicate.

Workshop on DIY ISPs

Taziden's talk was followed by a workshop. About 30 people from various countries (Germany, Belgium, Canada, Austria, Slovenia and France) gathered around a table and discussed.

Each participant described their own project (or idea for a project). An interesting point was the diversity of approaches: neighborhood communities, student-run networks, community networks, new projects, defunct projects... We then exchanged views on the difficulties we encountered while running our ISPs or community networks.

At the end of the debate, the conclusion was quite clear: DIY ISPs of the world, unite! More specifically, we need tools to:

  • get to know each other
  • document technical resources (network, administrative procedures, etc.)
  • document legal considerations and possible legal issues

The FFDN already provides some tools: an international mailing list and, of course, our wonderful map.

To further strengthen the bonds between DIY ISPs, a wiki was launched in the wake of the workshop: http://www.diyisp.org. The content is still bare; we call on every DIY ISP around the world to share bits of their knowledge with the global community.

Let's all move forward together: there are many DIY ISPs out there, each with great ideas, tools and an eager determination to share!

StatusPage.io Blog - Growing From $5,000 to $25,000 in MRR


Comments:"StatusPage.io Blog - Growing From $5,000 to $25,000 in MRR"

URL:http://blog.statuspage.io/growing-from-5000-to-25000-in-mrr


Growing From $5,000 to $25,000 in MRR

Back in July we wrote a post on 5 steps to $5,000 in monthly recurring revenue (MRR) and promised we would write follow-up posts for $25k and $100k in MRR. We're excited to announce we've reached $25,000 per month! Here are 6 things that helped out.

1. Building product people want

After YC ended, we got right back to cranking out product. A few of the new features and enhancements included:

  • Custom HTML
  • Activity Log (pictured below)
  • HipChat Integration
  • Custom Metrics
  • Component Subscriptions (launching out of beta soon...)
  • Redesigned notifications

None of these features were built in a vacuum. Instead, we identified a group of potential customers who would find a new feature useful and stayed in contact with them from idea to launch. By the time a feature launched, we would have 5-10 existing customers already using it.

What to take away from this

Startups are inherently underdogs. Everyone likes to root for an underdog - you just need to give your customers reasons to. Without a multi-million dollar marketing budget, the only ways to do this are to ship quality product fast and to provide incredible customer support. When you identify groups of customers that would find a new feature useful and involve them in product development (initial feature scope, usability tests...etc), they feel empowered to help you succeed, and in turn are usually the ones singing your praises to the other decision makers in their networks.

2. Riding coat-tails

If there’s any ‘growth hack’ that worked well for us, it’s been product integrations with successful companies. Early on, we looked at other companies with whom we shared mutual customers and with whom a product integration would be mutually beneficial. We built integrations with New Relic, Datadog, Pingdom, Librato, TempoDB, Heroku, and HipChat. Now, our signups love the fact that they can hook their status page up to their existing tools with only a few clicks.

How it helped

StatusPage.io featured as a New Relic Connect Partner

Worked with HipChat Marketing to launch an integration on their blog

StatusPage.io featured in Heroku Add-ons store

What to take away from this

While you’re building an integration, approach the conversation with someone in BD or Marketing at the other company by asking yourself how the integration will help the company's customers. A few examples using the integrations above:

  • Our New Relic integration will let any New Relic customer power a status page using their very own New Relic metrics.
  • Our HipChat integration will let any HipChat customer pipe status page updates into a company wide HipChat room, eliminating the need for status emails and keeping the entire company on the same page during an incident.
  • Our Heroku integration will let any Heroku customer launch a status page in one click, without ever having to leave the Heroku ecosystem.

Companies love when developers build on their APIs and will gladly help you get the word out if you build something their customers will value without them having to do any of the leg work.

3. Increasing In-app Referrals

At the bottom of every status page, we include a small, “Powered by StatusPage.io” link. While we felt uneasy about this at first, one of our mentors encouraged us to include the link and it has worked incredibly well. It turns out that our customers don’t mind displaying the link, essentially letting their customers know that the status page is hosted outside of their own infrastructure.

How it helped

One-third of new signups and customers originate from our existing customers' status pages.

What to take away from this

There’s no black and white answer here, but it seems like SaaS customers have become more willing to display subtle branding. Focus on building a product that your customers love and there’s a good chance they’ll want to help you spread the word.

4. Building a brand by writing targeted, interesting content

One path SaaS companies take to begin building a brand is with solid content. Our brand is not just about downtime communication. It’s about the three of us and what we’ve learned along the way. It’s about working as a distributed team. It’s about creating a product that solves a specific pain point and delivering on needed features. It’s about customer support and being human.

How it helped

Content can lead to immediate spikes (our top posts garnered 50,000 and 25,000 visits), but the payoff is usually in the long tail. During every conversation with prospective customers, we always ask the question, “How did you hear about us?” Many responses go something like this: “I think I may have seen a blog post of yours,” or “You guys wrote that post on reaching $5k in MRR, right?” and “I honestly can’t remember, but I may have seen a post of yours or one of your customers' status pages.” Ironically, it seems like the less people actually remember how they heard about you, the better job you’ve done at content marketing.

What to take away from this

There’s no hack to writing actionable, interesting content. When writing a new post that we plan to submit to Hacker News, we constantly ask ourselves, “Is this actually interesting?”, “Will this actually teach something new or offer a new perspective on a topic?” We usually re-write a blog post 2 to 3 times before it goes live.

Also, when people first hear about your product, they're most likely not ready to buy, let alone sign up. An active blog lets you stay top of mind for when the time is right. In our case, this means we need to establish mindshare before a company has an outage.

5. Increasing ARPU

If you have a product that people want, you’ll also have a constant backlog of features that customers ask for. The backlog means you’re doing something right, but it also means you need to figure out how to spend your time and how to prioritize product development. One question we ask ourselves is, “Will feature x help our core customer base?” If so, “Which customers specifically do we think will use this feature or upgrade for it, and most importantly, why?”

For example, from Day 1, an enterprise customer of ours asked for a specific feature called Component Subscriptions. We punted on the feature at first since they were the only ones to ask and it really wasn’t needed by the majority of our customer base. Over time, however, we began to hear a substantial amount of interest both from the enterprise company and from other core customers leading us to move forward with the feature. See below for a quick breakdown on our initial projections.

How it helped

We’ve been able to increase Average Revenue Per User from $54/month/customer to $71/month/customer and we’ve upgraded 57 customers over the past 6 months by being judicious with product development and building features for our core customer base. We like to keep an eye on these numbers using the graphs from Baremetrics.io below.

What to take away from this

It’s easy to be a ‘yes’ person and acquiesce to all customer requests. It’s hard to get to the core of why a customer requested a specific feature. Is it because their boss asked them to request it? Is it because they’re confused on how to use your product? Or is it actually because they need the feature?

If you’re new to product development, try going through the five whys when talking to a customer about a request to get to the core of the issue.

You’ll know if you’ve made the right call on building a feature if you’re able to either a) get more signups to hit an activated threshold or b) get core customers to upgrade for that feature.

6. Limiting Churn

Every SaaS company knows that in order to be successful, you need to keep churn low. We’re no different. Our churn in terms of number of canceled vs active accounts per month is 2.52%. In terms of canceled revenue versus active revenue, churn is closer to 1%.
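For clarity, the two figures use different denominators. A quick illustration with made-up counts (only the 2.52% and roughly 1% percentages come from this post):

# Two churn definitions, with placeholder numbers for illustration only.
cancelled_accounts, active_accounts = 10, 397      # hypothetical month
cancelled_mrr, active_mrr = 290.0, 29000.0         # hypothetical dollars

logo_churn = cancelled_accounts / active_accounts  # cancelled vs active accounts
revenue_churn = cancelled_mrr / active_mrr         # cancelled vs active revenue

print(f"logo churn:    {logo_churn:.2%}")          # ~2.52%
print(f"revenue churn: {revenue_churn:.2%}")       # 1.00%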

What to take away from this

The reasons for churn will always be a mix of pricing objections, product objections, poor customer support, and competitive products. To adequately fix churn problems, you need to figure out why customers churn in the first place. For us, the vast majority of our churned customers initially signed up for our lowest-tier $19/month account and either didn't end up using the product or found us too expensive. Overall, most of these customers were not a good fit from the get-go, so we’re fairly satisfied with our churn levels.

What's next for $25k - $100k?

  • Testing out a variety of paid marketing channels
  • Staying in touch with companies that don't convert after 2 weeks with retargeting, better drip campaigns, and individual follow-ups.
  • Hiring one or two more developers to ship more product faster
  • Expanding our core product offering
  • Hiring for an entry level sales/account management position

Thanks to everyone who has helped us reach this milestone and stay tuned for more!

Discuss this on Hacker News


Read your Damn Standard Libraries | Coding for Interviews


Comments:"Read your Damn Standard Libraries | Coding for Interviews"

URL:http://blog.codingforinterviews.com/reading-code-standard-libraries/


A Coding for Interviews weekly practice problem group member mentioned during our Skype customer interview that reading through the Java collections library was the most valuable step he took while preparing for his Google interviews. In addition to getting a better understanding of the standard data structures, hearing a candidate say “well, the Java collections library uses this strategy…” is a strong positive signal.

Indeed, other HNers echoed the sentiment:

Wael Khobalatte writes (link added),

I usually have a fun time going from method to method just exploring any of the implementations in that library. Josh Bloch (One of the writers of the library I think) uses a lot of examples from the Java collections in his book “Effective Java“, which I also recommend to everyone.

And dev_jim provides a great suggestion,

It helped me when I was a bit rusty with data structures and was looking for a new job recently. And if you can try and re-implement the data structures in a different way. For example, the JDK HashMap uses chaining so try and build an open addressing one. Not only does it teach you about data structures themselves but it gets you practicing coding very quickly for these toy coding problems that get thrown at you during interviews.

Reading the standard libraries of your favorite programming languages is really worth it. Give it a shot. Here’s why.

Reading standard libraries can not only show you what is available, it often teaches you how to write Good Code™ in your language as well.

  • Do you include newlines after blocks?
  • How are variables, classes and methods normally named?
  • What state is kept and why?
  • How and when do enums, structs and plain old classes get used?

As a Ruby programmer, reading the standard libraries of Ruby and Ruby on Rails has been especially enlightening.

Writing Rails for the first time, it feels like you are writing pseudo-code.

Helpful::Application.routes.draw do
  use_doorkeeper do
    controllers :applications => 'oauth/applications'
  end

  if Rails.env.development?
    require 'sidekiq/web'
    mount Sidekiq::Web, at: "/sidekiq"
  end

  root to: 'pages#home'
end

  • Where is this function root defined?
  • What makes the use_doorkeeper code’s call to controller different than another?
  • From another example: why and how does User.find_by_favorite_food('pizza') have the variable name food embedded inside of the method name?

Beginners will make mistakes that seem bizarre for people who have programmed before—calling methods with the names of methods or classes, declaring symbols instead of making function calls, forgetting to pass a block to something that requires it.

Once you begin diving into the internals of what makes Rails (and Ruby itself) tick, it becomes clear where these top-level-seeming functions are going and how methods can apparate out of thin air.
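As a cross-language illustration of how methods can appear out of thin air (an analogy only, not how Rails is actually implemented), the Python sketch below uses __getattr__ to synthesise find_by_<field> finders at call time; all names in it are hypothetical.

# Analogy for dynamic finders: intercept unknown method names and build a
# finder on the fly. A tiny in-memory "model" stands in for a real ORM.
class Model:
    _records = [{"name": "ada", "favorite_food": "pizza"},
                {"name": "linus", "favorite_food": "sushi"}]

    def __getattr__(self, name):
        if name.startswith("find_by_"):
            field = name[len("find_by_"):]            # e.g. "favorite_food"
            def finder(value):
                return [r for r in self._records if r.get(field) == value]
            return finder
        raise AttributeError(name)

User = Model()
print(User.find_by_favorite_food("pizza"))   # -> [{'name': 'ada', 'favorite_food': 'pizza'}]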

For Rails specifically:

Reading through the libraries (and even just reading through the full API docs) will expose you to a number of utility functions and data structure parameters you might not have known existed. Did you know you could specify the load factor for a Java hashtable?

A lot of the time, standard libraries contain the kinds of data structures and algorithms you seldom encounter when you are programming at the application level. Often, then, when you step into an interview and are asked how you might implement a hash table, you panic: whenever you wanted to use a hash table, you would just reach for HashMap<K,V>.
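Taking up the earlier suggestion to re-implement a JDK-style HashMap with open addressing, here is a minimal Python sketch (linear probing, no deletion); it is an interview-style exercise under simplifying assumptions, not a reference implementation.

# Minimal open-addressing hash table with linear probing.
class OpenAddressingMap:
    _EMPTY = object()                              # sentinel for unused slots

    def __init__(self, capacity=8, load_factor=0.75):
        self._slots = [self._EMPTY] * capacity
        self._load_factor = load_factor            # resize threshold, as in java.util.HashMap
        self._size = 0

    def _probe(self, key):
        """Yield slot indices starting at the key's hash bucket (linear probing)."""
        i = hash(key) % len(self._slots)
        while True:
            yield i
            i = (i + 1) % len(self._slots)

    def put(self, key, value):
        if (self._size + 1) / len(self._slots) > self._load_factor:
            self._resize(2 * len(self._slots))
        for i in self._probe(key):
            slot = self._slots[i]
            if slot is self._EMPTY or slot[0] == key:
                if slot is self._EMPTY:
                    self._size += 1
                self._slots[i] = (key, value)
                return

    def get(self, key, default=None):
        for i in self._probe(key):
            slot = self._slots[i]
            if slot is self._EMPTY:                # empty slot reached: key absent
                return default
            if slot[0] == key:
                return slot[1]

    def _resize(self, new_capacity):
        live = [s for s in self._slots if s is not self._EMPTY]
        self._slots = [self._EMPTY] * new_capacity
        self._size = 0
        for key, value in live:
            self.put(key, value)

m = OpenAddressingMap()
m.put("favorite_food", "pizza")
print(m.get("favorite_food"))                      # -> pizza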

Reading the low-level constructs of your language will help you prepare for your interviews and learn how some of the more interesting things in computer science are implemented in real, road-tested systems.

Give it a shot in your language of choice!

  • Java: GrepCode is a nice viewer for the standard Java libraries
  • Python: pure python libraries are available in /Lib (web viewer), and C modules are in /Modules (web viewer)
  • Ruby: Qwandry is a neat tool for popping open Ruby and Ruby on Rails libraries with just one shell command. You can also read the Ruby source on GitHub. /lib (on GitHub) is a good place to start.
  • C++: libstdc++ STL implementation has a number of interesting libraries (web viewer, click on “Go to the source code of this file” to view sources)

What is Coding for Interviews?

Each week, Coding for Interviews members receive two things:

  • A programming interview question
  • A distilled computer science topic review

You send in your answer and the next week we review solutions.

We practice a little bit each week. The idea is, the next time our group members are looking for jobs, we will be prepared.

New group members are always welcome!

One email each week. No spam. Easy unsubscribe.

IBM 704 Computer | Lee Jennings – Amateur Radio ZL2AL


Comments:"IBM 704 Computer | Lee Jennings – Amateur Radio ZL2AL"

URL:http://www.zl2al.com/blog/category/ibm-704-computer/


Magnetic Core Stacks at the Left, Mainframe in the Centre.
727 Tape Drives with the 407 Line Printer at the right

Sometimes truth is stranger than fiction. After leaving college in Toronto I went to work for IBM Canada as an engineer. After a 6-month course at the Don Mills IBM plant in Toronto I left for Ottawa for a 1-year stint servicing the punched-card accounting machines that analysed and crunched the data for the 1958 Canadian Census. Punched cards generated big money for IBM. It was the dawn of the computer era, and after another few months of training during 1959 I was assigned to the new IBM 704 scientific computer based at the A.V. Roe plant on the outskirts of Toronto, along with 2 other technicians. The A.V. Roe Aircraft Company of Canada was building the Avro Arrow, and the 704 was to be used for design work. The aircraft was a revolutionary jet interceptor, designed and built by Canadians. The delta-winged Arrow was a plane of firsts: fly-by-wire, computer control, an integral missile system, and capable of Mach 2+. The Cold War with the Soviet bloc was raging.

That’s me at the console in the photograph above.  I was 23 at the time.

IBM manufactured and sold 136 704s. This one was the only one that ever made it to Canada. The usual method of input to the system was magnetic tape, but entry could also be gained from punched cards through the card reader or from the operator’s console, if special instructions were required. All information, whether part of the data to be processed or part of the program of instructions, started out on punched cards. Then, it either could be converted directly to magnetic tape before being read into the system, or it could be read directly.

At the start of each procedure, the program of instructions was read into the memory from tape or cards and was stored there for use with each record processed. Usually the records to be processed would already have been converted from punched cards to tape or would be on a tape that was the output of an earlier processing operation.

The results of the processing were either produced on a line printer, on a magnetic tape, or on punched cards. If the operator did not want to tie up the entire system while the relatively slow printing or punching was accomplished, he or she could produce an output tape, then connect a tape unit directly to the printer or card punch and print out or punch the results without using the principal components of the system.

It had some massive computing power for its time, which included 2 x 512 k magnetic core memory stacks, 6 high-speed tape readers, a punched-card input reader and the huge mechanical line printer on the right of the photo. The tape handlers at the back of the picture were IBM 727s, among the first drives using vacuum columns and tapes made from plastic coated with an iron oxide for magnetic storage. These tape drives and, more importantly, the tapes themselves stimulated the acceptance of half-inch-wide, 7-track tape, with recording densities of 200 cpi and speeds of approximately 60 inches per second (ips), as an industry standard. Additional capacities were added over the years: 556 and 800 cpi, and speeds of 120 ips or more.

The main console is shown below.

The 704 Main Console

The IBM 704 became the dominant vacuum-tube logic computer in the late 1950s. A 32K, 36-bit word memory was the largest available. The console is shown below, and you could read what was in the memory stacks from the neon bulbs above. You could also input a binary word via the switches below. It was programmed in the Fortran language and carried a 36-bit binary word as its data on the wire buses. Each 36-bit computer instruction contained 1 or 2 address fields of 15 bits, so that the full 32K, 36-bit word memory could be addressed directly. The two fields were called decrement (an index) and address, and those names live on in the LISP commands CDR and CAR.
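As a rough illustration of that word layout (a toy sketch, not IBM code; the 3-bit prefix, 15-bit decrement, 3-bit tag, 15-bit address split is the commonly documented Type A instruction format, and the example word is made up):

# Toy decoder for a 36-bit IBM 704 Type A instruction word:
#   3-bit prefix | 15-bit decrement | 3-bit tag | 15-bit address
# The sample word below is invented purely to show how the fields pack.

def decode_type_a(word):
    address   =  word        & 0o77777     # low 15 bits
    tag       = (word >> 15) & 0o7         # 3 bits
    decrement = (word >> 18) & 0o77777     # 15 bits
    prefix    = (word >> 33) & 0o7         # top 3 bits
    return prefix, decrement, tag, address

word = (0o3 << 33) | (0o01234 << 18) | (0o1 << 15) | 0o04567
print(decode_type_a(word))                  # -> (3, 668, 1, 2423)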

IBM stated that the device was capable of executing up to 40,000 instructions per second.

The programming languages FORTRAN and LISP were first developed for the 704, as was MUSIC, the first computer music program by Max Mathews.

In 1962 physicist John Larry Kelly, Jr. created one of the most famous moments in the history of Bell Labs by using an IBM 704 computer to synthesize speech. Kelly's voice recorder synthesizer (vocoder) recreated the song Daisy Bell, with musical accompaniment from Max Mathews. Arthur C. Clarke, of 2001: A Space Odyssey fame, was coincidentally visiting friend and colleague John Pierce at the Bell Labs Murray Hill facility at the time of this remarkable speech synthesis demonstration and was so impressed that he used it in the climactic scene of his novel and screenplay for 2001: A Space Odyssey, where the HAL 9000 computer sings the same song.

The magnetic core memory could be as large as 32K words, but because of the cost of memory, 8K and 16K systems were common. It was the first IBM machine to use core memory. The memory cycle time was 12 microseconds, and an integer addition in its 36-bit registers required two cycles, one for the instruction fetch and one for the data fetch. Floating-point operations required on the order of 10 such cycles.

During its tenure, memory capacity grew as IBM learned how to make core memory. Initially the 704 had 4K, then 8K, then 32K words. A word was 36 bits. The 704 was a nightmare to work on: it had 4,000 tubes and an 8 MHz clock with 117 miles of wiring. The tubes were mostly 12AT7s, mounted on "pluggable units" with 8 tubes each.

The 8 tube Pluggable unit with “real” resistors and capacitors.
This unit represented 1 “bit” in a 36 bit word in one register

The 4,000 tubes ran on +150-volt and -150-volt power supplies, with 12 VAC for the tube filaments. Needless to say, the machine generated enormous amounts of heat and required 32 tons of air conditioning to keep it at a constant temperature. As the engineering technicians, it was our job to keep the machine running. We used to bring up power and get the machine up to temperature each morning at 7 AM and then run a "Round Robin" program which checked out every possible function in Fortran. While it was doing its run, our job was to vary the DC voltages until it failed and note where it failed. We then had to return the machine to normal operation and have it ready for an 8 AM customer start. During the day, we pored over manuals and diagrams trying to figure out which tube (or tubes) was failing to cause the problem. At 5 PM or the next morning we would replace a pluggable unit and check whether the problem went away. When it didn't go away and got worse each morning, we all knew that panic was not far off; IBM management took a dim view of the machine not working, as it cost them a lot of revenue every hour it was offline.

I was “married” to this 704 computer and was on call at any time if required, along with 2 other technicians, from 1959 to 1962. When A.V. Roe in Malton, Ontario shut down Avro Arrow production following the Canadian Diefenbaker government's decision to “can” the Arrow, we dismantled the whole machine and re-assembled it at the IBM Data Processing Center on Eglinton Avenue in Toronto, where we set it up again. It then operated as a data cruncher for an oil company, a university and other commercial customers. I understood that IBM rented it out at thousands of dollars an hour. Demand for computing was growing rapidly and the world was moving past tubes and into transistors. Solid state was the word! I started to work on solid-state equipment and then decided to leave IBM after 7 years to work for Canadian Motorola. I didn't realize it at the time, but it was the greatest privilege to be there at the dawn of the computer revolution. Yes, Bill Gates and Steve Jobs did some cool stuff in the 1980s, and the urban myth is that that was the beginning. It wasn't!

IBM was the beginning, and for 20 years they dominated the industry with the 700 series, 650s, 305 RAMACs and other commercial computers. Sadly, they never saw Bill Gates and his kind coming, and IBM lost its whole business base to Silicon Valley upstarts.

How to use Photoshop to design interfaces | Nathan Barry


Comments:"How to use Photoshop to design interfaces | Nathan Barry"

URL:http://nathanbarry.com/how-to-use-photoshop-to-design-interfaces


You care about designing good software. How do you do that? You can read books about user experience and design, but at some point you have to actually start designing the interface. More than just how the software or website functions and how you interact with it, but what it actually looks like.

There are plenty of tools. Many people even design directly in the browser. But the vast majority of professional designers use Adobe Photoshop. Now with Adobe’s new Creative Cloud pricing you can get the entire Creative Suite for $50/month or less (what you’d pay for any SaaS app). So you’ve selected Photoshop and it’s cheaper than ever to buy, but… how do you use it?

Searching for Photoshop tutorials will teach you how to turn photos black and white, remove red-eye, and improve your photography workflow. But, wait… You aren’t a photographer. You just care about designing great software.

It turns out Photoshop is used by several distinctly different groups of people including photographers and interface designers. Unfortunately, all the beginner tutorials are for photographers. I want to help you skip past all the tools you won’t use and jump straight into the tools and techniques actually used to design interfaces.

Ready?

Layers and Groups

You can’t use Photoshop effectively without understanding layers. Only beginners put multiple design elements on the same layer. Here’s how layers work.

In school, did you ever see those anatomy textbooks that used different layers of transparencies so that you could flip through different parts of the human body? Each page would have just a few parts on it, but they would all stack together to form the complete image. This means you could look at everything all at once or focus in on a particular system of the body.

Layers in Photoshop are just like that. Any interface will be made up of different elements like buttons, headers, and labels. But to make sure each one is easy to edit later (this is very important!) each element is on its own layer.

In the layers palette (on the right side of your Photoshop interface by default) that document looks like this:

Just like with the transparencies in an anatomy textbook, the stack order of your layers is very important. Anything on the topmost layer will cover up content on the layers beneath it. But see that white and grey checkerboard pattern? That implies transparency. Anywhere a layer is transparent the content below it will show through into the final image.

You can draw on any layer at any time. Just select the layer you want to draw on in the layers palette and start creating. But if your new content isn’t appearing on your canvas, make sure the area you are drawing on isn’t being covered by another, higher layer.

Groups

Layers in Photoshop can get insanely messy. It’s normal for an interface design to have a few hundred layers—if not a few thousand! To tame this potential disaster you need to do two things:

  • Name your layers. Seriously. If you plan to ever use this PSD again, name each layer as you create it (I actually failed to do that in the above screenshot. Oops!). Your life will be easier.
  • Organize your layers with groups. Using groups (and naming them) is even more important than naming individual layers.

As you finish a part of your design you want to put those elements in a group. So if I have a button, two labels, and a text field I could place them all in a group called “Form.” I can then collapse this group by clicking the triangle so it stops cluttering up my layers palette and I can move on to the next part of the design.

Creating groups

You can create groups two ways:

  • Click the “Create a new group” icon at the bottom of the layers palette (shown below).
  • Press Cmd + G.

If you opt for the second method you can select as many layers as you want, press Cmd + G, then they will all be instantly added to your new group.

Groups can also be reordered just like layers and even nested inside other groups. All rules about the stacking order and visibility still apply.

Filtering

Even with named layers and nicely organized groups, you can still get lost in a complicated document. Another solution, just added in Photoshop CC, is filtering your layers. You can filter based on kind, mode, effect, etc. But most commonly, I filter by name and just enter part of the layer name I want.

Type

All interfaces include text. In fact, you can have almost exclusively text and still have a good design. So as a new interface designer it is worth your time to master the type tool in Photoshop. To start, I want you to remember two letters: T and V.

Pressing T will switch to the type tool—officially it’s the “Horizontal type tool” but you rarely use the vertical option. And pressing V will switch to the move tool. This is important because you will often press T, enter some text, then want to move it around by switching quickly between these two tools.

With the type tool selected (press T) click anywhere on your canvas. Here’s what I want you to notice.

 

First at the top, notice the cancel circle and the confirm checkmark. Those are to confirm or cancel any current edits. While you are actively editing text you are stuck doing that until you deal with your edits. You can confirm by clicking the checkbox or pressing Cmd + Enter. If you just press Enter your text cursor will just move to the next line.

Next on the right side, I want you to notice that the type tool created a new type layer for you. It’s easy to overdo this and accidentally leave a mess of untitled type layers behind. Keeping an eye on your layers palette will make your life much easier.

Finally, in the center, I want you to notice the Character palette. Here you can change the font family (I have Lucida Grande at the moment), font style, kerning, leading (line-height in CSS), and more. The other commonly used type features when designing interfaces are the All Caps button (they can look so good…) and the Underline text button (denoting links).

Paragraphs

Now I mentioned that pressing Enter will move to the next line—just as you’d expect in any text tool—but you don’t use that often. If you want multi-line text, it is best to create a paragraph instead. How? That’s easy. Just click and drag when you create your text layer.

You have a new layer, just like before, but this one has a predefined area for the text to go.

 

 

I also now have the paragraph tab open instead of the character tab. With it I can change alignment and indentations. If you are designing for the web you will also want to turn off the Hyphenate checkbox as you can’t hyphenate text (where part of a long word breaks to the next line with a hyphen) in a browser (at least not easily).

So whether you are creating a paragraph of text to represent a blog post, or a single word to be a button label, the type tool is critical to master.

Shapes and Paths

Next you want to draw a button. Why? Because all interfaces have buttons. So what tool do you use? The marquee tool?

Nope. That’s just used for quick and dirty work (or used by complete beginners).

Instead you want to draw your new shape with a vector drawing tool.

I guess we should first talk about what vector and raster mean. Vector objects can be scaled infinitely without losing their crispness. Raster objects, on the other hand, are pixel based, so if you scale them up they will quickly become pixelated and blurry. Photoshop is a raster drawing program (pixel based), but it does have some vector tools (if you want pure vector, use Adobe Illustrator). You should use those vector tools whenever possible.

So instead of drawing a button with the marquee tool you want to use the shape tool (press U). Now there are several different shape tools: rectangle, rounded rectangle, ellipse, polygon, line, and custom. You can press Shift + U to toggle through them. Select the rounded rectangle tool.

Note in the top bar that you can set the radius. That refers to the number of pixels of the curve on each corner—set it to 5px (our button is going to have rounded corners). Now when you click and drag you will be drawing a rectangle with 5px rounded corners. Make it long and wide and you will have made a button!

 

That created a shape layer for you automatically (always watch the layers palette). The shape in a shape layer is defined by a path. Paths are vector based (so our shape is too) and can be modified at any time—which makes them awesome. Now look at the little squares surrounding the button—those are anchor points for the path. Paths are the most flexible method of creating objects or tracing photos—spend as much time learning to use them as possible!

Now take a look at the shape properties panel that is front and center. Here you can change the corner radius at anytime (new in Photoshop CC) as well as change the fill (I set it to green) or stroke colors.

Adding text

An unlabeled button isn’t very easy to use, so we need to add some text.  Switch to your text tool, type out a label, and press Cmd + Enter. Next set the text alignment to center and use the move tool (V) to drag it into place. Once you set the text to all-caps and the color to white, you have a nice looking button!

 

 

 

 

 

 

Now, if you pressed V and it didn’t switch to the move tool, that’s probably because you are currently editing text (meaning you didn’t press Cmd + Enter). So instead of using a keyboard shortcut, you just typed the letter “v” on your canvas.

The pen tool

Since you will be using shapes so much, it is worth your time to learn the pen tool to edit those shapes. Many consider the pen tool to be an intermediate or advanced part of Photoshop, but I think anyone can learn it with a little practice. It really is critical to know it well in order to design software in Photoshop.

Layer styles

Last, but certainly not least, are layer styles. Layer styles can bring flat, boring designs to life. Let’s take that button, wouldn’t it be great to make it look three dimensional, but without having to add any more layers?

Starting with a gradient

If you double-click on a layer in the layers palette you will open the layer style dialog. Then select the gradient overlay checkbox (and tab). Set the blend mode to “Overlay”, opacity to 20% and leave the rest set to the defaults.

 

Adding a stroke

Now that the button has a slight gradient (which is starting to give it a 3D look) we can add more details. Next add a stroke. Set the position to “Inside”, size to 1px, and choose a color a little darker than the darkest color in your gradient.

 

Finishing with an inner shadow

Finally we want a highlight along the top edge to make the button look more crisp. Oddly enough we do this with an inner shadow. Set the shadow color to pure white (#ffffff), opacity to about 40% (play with this), blend mode to “Overlay”, the distance to 2px (1px will be covered by the inside stroke), and the angle to 90 degrees.

 

 

That’s just getting started with what is possible when you use layer styles. Once you add drop shadows, pattern overlays, and glows the possibilities increase dramatically! Here’s an example of what can be created on a single layer with just layer styles.

 

Pretty crazy!

Continue learning

To become an expert learn more about layers, type, shapes, paths, and layer styles. Those are the basic tools used by professional designers to design great looking websites and applications.

If you want all the training you need in one place, take a look at my new course, Photoshop for Interface Design. I teach exactly what you need to know to design interfaces and skip over the parts that only photographers and graphic designers would use. In other words, you learn the skills you need so much faster.

The course will be out in beta on January 14th. Sign up for the email list to make sure you don’t miss it!

2.x-vs-3.x-survey - Python Wiki


Comments:"2.x-vs-3.x-survey - Python Wiki"

URL:https://wiki.python.org/moin/2.x-vs-3.x-survey


At the end of 2013/beginning of 2014, a survey was conducted to see how much 3.x is catching on.

The survey itself is here. Note that recent takers of the survey will not see their results in the PDF below.

The survey was publicized via posts to comp.lang.python, python-dev and hacker news.

You can find the results here in PDF format: 2013-2014 Python 2.x-3.x survey.pdf.

Results as a single chart; unfortunately you need to mouse over the labels to see the full question text.


Academics Against Mass Surveillance


Fired? Speak No Evil - NYTimes.com


Comments:"Fired? Speak No Evil - NYTimes.com"

URL:http://www.nytimes.com/2014/01/03/opinion/fired-speak-no-evil.html


I WAS fired the other day. I believe the preferred phrase is “terminated,” which is how my former employer, Byliner, a digital publishing company and subscription service in San Francisco, put it. I was informed that our “burn rate” was too high, that we needed to show investors that we were serious about reducing it, and that while my loss of a job was unfortunately going to be a part of that reduction, this had nothing to do with the quality of my work. Soon thereafter, an email arrived from the company’s founder and chief executive saying how much he had enjoyed working with me.

Around the same time, a termination agreement pinged into my inbox. Much of it set forth standard-issue language resolving such matters as date of termination, the vesting of options, the release of all claims against the company, and the return of company property. I think I get to keep last year’s Christmas gift of an iPad, and the previous year’s bottle of wine has long been drunk, but I must send back any company files in my possession. So far, so good.

What brings me up short is clause No. 12: No Disparagement. “You agree,” it reads, “that you will never make any negative or disparaging statements (orally or in writing) about the Company or its stockholders, directors, officers, employees, products, services or business practices, except as required by law.” If I don’t agree to this nondisparagement clause, I will not receive my severance — in this case, the equivalent of two weeks of pay. Two weeks? Must be hard times out in San Francisco, or otherwise why the dirt parachute — and by the way, is that the sort of remark I won’t be allowed to make if I sign clause No. 12?

I would prefer not to, as Bartleby the Scrivener put it so succinctly in Herman Melville’s classic tale of bureaucratic resistance. When I shared that inclination with one of my superiors at Byliner, the news traveled up the chain of command. And I was soon informed that the president wished to assure me that there is nothing unusual about such clauses, that media people like herself sign them all the time, and that Byliner might even agree to a mutual nondisparagement clause. That means that if I don’t say anything mean about the company, its representatives won’t say anything unkind about me.

You might say, what’s the big deal? Sign the damn thing already! And indeed, it’s true, as Byliner implied, that nondisparagement clauses prohibiting individuals from saying or writing anything that might have a negative effect on a corporation are increasingly common — used in most settlement agreements and about a quarter of executive employment agreements.

To make the choice of nondisparagement even easier for a prospective signee, there is no established body of law precisely defining “disparagement.” Several state laws suggest that for former employees to violate a nondisparagement contract, their statements must be not only disparaging but also untrue. This means that it would probably be hard for a company to prove disparagement in the courts.

So if nondisparagement agreements are downright ordinary and at the same time difficult to enforce, why not sign and take the severance?

Because as quaint as this may seem, giving up the right to speak and write freely, even if that means speaking or writing negatively, strikes me as the unholiest of deals for a writer and an editor to accept. Though such clauses don’t technically violate the First Amendment — I’d be explicitly agreeing to forfeit my right to speak freely if I signed clause No. 12 — such a contract has a paralyzing effect on the dissemination of the truth, with all of truth’s caustically cleansing powers. To disparage is but one tool in a writer’s kit, but it’s an essential one. That a company would offer money for my silence, which is what this boils down to — well, I’ve seen many a mob movie about exactly that exchange.

The increased prevalence of nondisparagement agreements is part of a corporate culture of risk management that would have us say nothing if we can’t say anything nice. And yet it occurs to me that if a company isn’t strong enough to be reproached, then it simply isn’t strong enough, period.

Mind you, I’m not looking to disparage Byliner. The company has made a few mistakes in my view (firing me perhaps being a relatively minor one), but what fledgling enterprise does not screw up from time to time during its shakedown phase? It’s not that I necessarily want to disparage, but I want the freedom to do so, to be able to criticize, to attack, to carp, to excoriate, if need be. I want to tell the truth, even if it isn’t pretty.

That’s why I won’t sign clause No. 12. Byliner can keep the money. I’ll keep my self-respect.

Will Blythe, former editor at large for Byliner, is the author of “To Hate Like This Is to Be Happy Forever.”

pksunkara/alpaca · GitHub


Comments:"pksunkara/alpaca · GitHub"

URL:https://github.com/pksunkara/alpaca


alpaca

Api Libraries Powered And Created by Alpaca

Tired of maintaining API libraries in different languages for your website API? This is for you

You have an API for your website but no API libraries for whatever reason? This is for you

You are planning to build an API for your website and develop API libraries? This is for you

You define your API according to the format given below, and alpaca builds the API libraries along with their documentation. All you have to do is publish them to their respective package managers.

Join us at gitter if you need any help. Or at #alpaca on freenode IRC.

Installation

You can download the binaries

Or by using deb packages

Or by using golang

# Clone the project into your golang workspace
$ git clone git://github.com/pksunkara/alpaca

# Compile templates
$ go get github.com/jteeuwen/go-bindata
$ cd alpaca && ./make

# Install the program
$ go get
$ go install github.com/pksunkara/alpaca

Examples

You can find some api definitions in the examples directory. The api libraries generated are at https://github.com/alpaca-api

Completed api definitions are buffer.

Usage

The path here should be a directory with api.json, pkg.json, doc.json

pkg.json

All the following fields are required unless mentioned.

{"name":"Example",// Name of the api (also used as class name for the library)"package":"example-alpaca",// Name of the package"version":"0.1.0",// Version of the package"url":"https://exampleapp.com",// URL of the api"keywords":["alpaca","exampleapp","api"],// Keywords for the package"official":false,// Are the api libraries official?"author":{"name":"Pavan Kumar Sunkara",// Name of the package author"email":"pavan.sss1991@gmail.com",// Email of the package author"url":"http://github.com/pksunkara"// URL of the package author},"git":{// Used in the package definition"site":"github.com",// Name of the git website"user":"alpaca-api",// Username of the git website"name":"buffer"// Namespace of the git repositories},"license":"MIT",// License of the package"php":{// Required only if creating php api lib"vendor":"pksunkara"// Packagist vendor name for the package},"python":{// Required only if creating python api lib"license":"MIT License"// Classifier of the license used for the module}}

api.json

All the following fields are required unless mentioned.

{"base":"https://exampleapp.com",// Base URL of the api"version":"v1",// Default version for the api (https://api.example.com{/version}/users) [optional]"authorization":{// Authorization strategies"basic":true,// Basic authentication [optional] (default: false)"header":true,// Token in authorization header [optional] (default: false)"oauth":true// OAUTH authorization [optional] (default: false)},"request":{// Settings for requests to the api"formats":{// Format of the request body"default":"form",// Default format for the request body [optional] (default: raw)"json":true// Support json? [optional] (default: false)}},"response":{// Settings for responses from the api"formats":{// Format of the response body"default":"json",// Default response format. Used when 'suffix' is true [optional] (default: html) "json":true// Support json? [optional] (default: false)},"suffix":true// Should the urls be suffixed with response format? [optional] (default: false)},"error":{// Required if response format is 'json'"message":"error"// The field to be used from the response body for error message},"class":{// The classes for the api"users":{// Name of a class of the api"args":["login"],// Arguments required for the api class [optional]"profile":{// Name of a method of the api"path":"/users/:login/profile",// Url of the api method"method":"post",// HTTP method of the api method [optional] (default: get)"params":["bio"]// Arguments required for the api method [optional]}}}}

doc.json

The following is filled according to the entries in api.json

{"users":{// Name of a class of the api"title":"Users",// Title of the api class"desc":"Returns user api instance",// Description of the api class"args":[{"desc":"Username of the user",// Description of the argument"value":"pksunkara"// Value of the argument in docs}],"profile":{// Name of a method of the api"title":"Edit profile",// Title of the api method"desc":"Edit the user's profile",// Description of the api method"params":[{"desc":"Short bio in profile",// Description of the argument"value":"I am awesome!"// Value of the argument in docs}]}}}

Request formats

Supported request formats are raw, form, json.

The formats raw and form are always true.

Response formats

Supported response formats are html, json.

The format html is always true.

Authorization strategies

Supported are basic, header, oauth

Package Managers

Testing

Check here to learn about testing.

Contributors

Here is a list of Contributors

I accept pull requests and guarantee a reply back within a day

TODO

General
  • Convert make into Makefile
Responses
  • Add support for XML
  • Add support for CSV
Requests
  • HTTP Method Overloading
  • What about file uploads?
Api
  • Check returned status code
  • Special case for 204:true and 404:false
Libraries
  • Pagination support
  • Classes inside classes (so on..)
  • Validations for params/body in api methods
  • Allow customization of errors
  • Tests for libraries (lots and lots of tests)
Readme
  • Optional params available
  • Return types of api calls
Comments
  • The descriptions should be wrapped
  • Align @param descriptions
Languages
  • Support Java, Go, Perl, Clojure, Scala, Obj-C
  • Build API docs (Resulting in bloated definitions?)
  • Build cli tool for APIs (bash? python? go?)

License

MIT/X11

Bug Reports

Report here. Guaranteed reply within a day.

Contact

Pavan Kumar Sunkara (pavan.sss1991@gmail.com)

Follow me on github, twitter

Tal Bereznitskey - iOS 7 only is the only sane thing to do


Comments:" Tal Bereznitskey - iOS 7 only is the only sane thing to do "

URL:http://berzniz.com/post/72083083450/ios7-only-is-the-only-sane-thing-to-do


I’m porting an app that originally targeted iOS 3 to supporting iOS 7 only. Almost three years later, so much has changed.

When I started programming it, iOS 4 was pretty new and not everyone had upgraded to it. You know, there were no over-the-air upgrades and you had to use iTunes to upgrade. No wonder only geeks were upgrading.

We decided to support iOS 3. Here is some of the stuff that wasn’t available back then:

Blocks. Can you imagine programming without using blocks? I can’t, but that’s what we did back then.

GCD. There was no Grand Central Dispatch. Doing background work wasn’t as easy as it is today.

ARC. No automatic reference counting meant worrying a lot about leaks, crashes and retain cycles. With ARC, you mainly have to consider retain-cycles and choosing between weak and strong references.

UIAppearance. There was no way to customize the basic UIKit controls. If you wanted a UISwitch with a different color, you had to build a complete custom switch control by yourself. It’s not just UIAppearance, it’s more that Apple understood that apps need a unique look.

The list goes on and on. These are the things that I added to the app as time went by. When iOS 5 came around, we dropped iOS 3 support, introduced blocks, GCD and ARC into the app. When iOS 6 came out, iOS 4 support was abandoned and we could more easily customize the look and feel of the app.

Our current app supports iOS 6 and iOS 7 at the same time. This is horrible. We can’t leverage what iOS 7 has to offer and a lot of the UI is compromised. Going iOS 7 only is the only sane thing to do.

As we’re going for an iOS 7 only release here are some things I’m glad for:

Pull To Refresh. When we added pull to refresh to our app there weren’t many apps using this technique and it added a premium feel to our app. Our custom built pull to refresh control is no longer needed as Apple added UIRefreshControl a long time ago.

UIViewController Containment APIs. It’s so easy to keep sanity with UIViewControllers and the containment APIs allow creating a smart and scalable view hierarchy.

Custom Tab Bar. To customize it, we had to build our own UITabViewController subclass. Who needs it now. Deleted.

Custom Navigation bar. We wanted a background image for it, which wasn’t possible back then. Also gone.

HTML Strings. I remember that for adding an underline to some part of a UILabel, you had to split it into 3 parts. Not anymore: attributed strings are so easy to create these days, you can even use HTML to create them.

@2x only. The era of retina only devices is here. Goodbye half of the images.

Flat out. A lot of retina images are also on their way out since most of the design has gone flat. Almost no need for images.

viewDidUnload. It’s not being called anymore by the OS. Wow, a lot of code is going to the trash!

AutoLayout. Not sure what I think of this yet. One thing I do know is that if you’d like to achieve an HTML/CSS-like rendering of UIViews, then this is the way to go. Instead of calculating UILabel heights, let iOS do it for you.

UIDynamics. We’re surely not going to use this too much, but when physics are needed, we’ll surely use it.

Receding keyboard like the messages app. This was one feature we really wanted and implementing this was painful as hell. I can’t believe it’s now just a boolean value away.

There are so many options to make the app much better with iOS 7: Better handling of push notifications, background fetches, custom transitions, multipeer connectivity and so much more. iOS 7 also keeps us focused on the content and not the chrome, this is the ultimate key strength of iOS 7.

The iOS 3 app pushed what’s possible to the edge and beyond, I’m hoping to do the same for iOS 7.

Breaking down cancer’s defence mechanisms | University of Cambridge

$
0
0

Comments:"Breaking down cancer’s defence mechanisms | University of Cambridge"

URL:http://www.cam.ac.uk/research/news/breaking-down-cancers-defence-mechanisms


A possible new method for treating pancreatic cancer which enables the body’s immune system to attack and kill cancer cells has been developed by researchers.

The method uses a drug which breaks down the protective barrier surrounding pancreatic cancer tumours, enabling cancer-attacking T cells to get through. The drug is used in combination with an antibody that blocks a second target, which improves the activity of these T cells.

Initial tests of the combined treatment, carried out by researchers at the University’s Cancer Research UK Cambridge Institute, resulted in almost complete elimination of cancer cells in one week. The findings, reported in the journal PNAS, mark the first time this has been achieved in any pancreatic cancer model. In addition to pancreatic cancer, the approach could potentially be used in other types of solid tumour cancers.

Pancreatic cancer is the fifth most common cause of cancer-related death in the UK and the eighth most common worldwide. It affects men and women equally, and is more common in people over the age of 60.

As it has very few symptoms in its early stages, pancreatic cancer is usually only diagnosed once it is relatively advanced, and prognosis is poor: for all stages combined, the one and five-year survival rates are less than 20% and less than 4% respectively. Tumour removal is the most effective treatment, but it is suitable for just one in five patients.

Immunotherapy – stimulating the immune system to attack cancer cells – is a promising therapy for several types of solid tumours, but patients with pancreatic cancer have not responded to this approach, perhaps because the human form of the cancer, as in animal models, also creates a protective barrier around itself.

The research, led by Professor Douglas Fearon, determined that this barrier is created by a chemokine protein, CXCL12, which is produced by a specialised kind of connective tissue cell, called a carcinoma-associated fibroblast, or CAF. The CXCL12 protein then coats the cancer cells where it acts as a biological shield that keeps T cells away. The effect of the shield was overcome by using a drug that specifically prevents the T cells from interacting with CXCL12.

“We observed that T cells were absent from the part of the tumour containing the cancer cells that were coated with chemokine, and the principal source of the chemokine was the CAFs,” said Professor Fearon. “Interestingly, depleting the CAFs from the pancreatic cancer had a similar effect of allowing immune control of the tumour growth.”

The drug used by the researchers was AMD3100, also known as Plerixafor, which blocks CXCR4, the receptor on the T cells for CXCL12, enabling T cells to reach and kill the cancer cells in pancreatic cancer models. When used in combination with anti-PD-L1, an immunotherapeutic antibody which enhances the activation of the T cells, the number of cancer cells and the volume of the tumour were greatly diminished. Following combined treatment for one week, the residual tumour was composed only of premalignant cells and inflammatory cells.

“By enabling the body to use its own defences to attack cancer, this approach has the potential to greatly improve treatment of solid tumours,” said Professor Fearon.

The research was supported by GlaxoSmithKline, the Medical Research Council, Addenbrooke’s Charitable Trust, the Ludwig Institute for Cancer Research, the Anthony Cerami and Anne Dunne Foundation for World Health, and Cancer Research UK.

For more information, please contact Sarah Collins on sarah.collins@admin.cam.ac.uk

ICANN's new rules for domain registrants require you to verify your contact details - iwantmyname Domain Blog

$
0
0

Comments:"ICANN's new rules for domain registrants require you to verify your contact details - iwantmyname Domain Blog"

URL:https://iwantmyname.com/blog/2014/01/icanns-new-rules-for-domain-registrants-require-you-to-verify-your-contact-details.html


Rules. You can't live with them, you can't have a domain without them. And starting January 2014, there are some important new rules to be aware of. These rules are set by the internet's governing body ICANN and affect all generic top-level domains (gTLDs) such as .COM, .NET, .ORG, .BIZ, .INFO or .NAME as well as upcoming new gTLDs.

Domains will be disabled if you don't verify them

Starting next week, when you register a domain with new contact details, transfer existing domains or change the registrant (owner) or email address on your contact information, you will receive an additional email with an activation link. If you don't click this link, your domain will be disabled after 15 days.

You need to verify your contact details if you:

  • purchase a domain with iwantmyname for the first time
  • have updated your iwantmyname account details
  • transferred a domain to iwantmyname and changed the contacts

I repeat, if you don't react to these emails, new ICANN policies mandate that we will have to disable your domain name. Even if you're in Antarctica and have no internet access. Even if your spam folder eats it alive.

Specifically, if your 15-day verification window passes, your website will be replaced by a page displaying a verification error, along with instructions on how to fix it. But on the plus side, it can be fixed. And if you do verify it on time, this domain email verification will only happen once.

So what should you do now?

As soon as you are able, please ensure that your domain name is associated with a valid email address. The last thing we want is for you to make a change, then have your verification email sent to an old address.

Also, if you purchase a domain or make a change after January 2014 and don't receive an email, or if you just have questions about the new ICANN rules, please get in touch.

David Cameron's internet porn filter is the start of censorship creep | Laurie Penny | Comment is free | The Guardian

$
0
0

Comments:" David Cameron's internet porn filter is the start of censorship creep | Laurie Penny | Comment is free | The Guardian "

URL:http://www.theguardian.com/commentisfree/2014/jan/03/david-cameron-internet-porn-filter-censorship-creep


'The worst thing about the porn filter is not that it accidentally blocks useful information but that it blocks information at all.' Illustration: Satoshi Kambayashi

Picture the scene. You're pottering about on the internet, perhaps idly looking up cake recipes, or videos of puppies learning to howl. Then the phone rings. It's your internet service provider. Actually, it's a nice lady in a telesales warehouse somewhere, employed on behalf of your service provider; let's call her Linda. Linda is calling because, thanks to David Cameron's "porn filter", you now have an "unavoidable choice", as one of 20 million British households with a broadband connection, over whether to opt in to view certain content. Linda wants to know – do you want to be able to see hardcore pornography?

How about information on illegal drugs? Or gay sex, or abortion? Your call may be recorded for training and monitoring purposes. How about obscene and tasteless material? Would you like to see that? Speak up, Linda can't hear you.

The government's filter, which comes into full effect this month after a year of lobbying, will block far more than dirty pictures. That was always the intention, and in recent weeks it has become clear that the mission creep of internet censorship is even creepier than campaigners had feared. In the name of protecting children from a rotten tide of raunchy videos, a terrifying precedent is being set for state control of the digital commons.

Pious arguments about protecting innocence are invariably marshalled in the service of public ignorance. When the first opt-in filtering began, it was discovered that non-pornographic "gay and lesbian" sites and "sex education" content would be blocked by BT. After an outcry, the company quickly changed the wording on its website, but it is not clear that more than the wording has been changed. The internet is a lifeline for young LGBT people looking for information and support – and parents are now able to stop them finding that support at the click of a mouse.

Sexual control and social control are usually co-occurring. Sites that were found to be inaccessible when the new filtering system was launched last year included in some cases helplines like Childline and the NSPCC, domestic violence and suicide prevention services – and the thought of what an unscrupulous parent or abusive spouse could do with the ability to block such sites is chilling. The head of TalkTalk, one of Britain's biggest internet providers, claimed that the internet has no "social or moral framework". Well, neither does a library. Nobody would dream of insisting a local book exchange deployed morality robots to protect children from discovering something their parents might not want them to see. Online, that's just what's happening, except that in this case, every person who uses the internet is being treated like a child.

Every argument we have heard from politicians in favour of this internet filter has been about pornography, and its harmful effect on young people, evidence of which, despite years of public pearl-clutching, remains scant. It is curious, then, that so many categories included in BT's list of blocked content appear to be neither pornographic nor directly related to young children.

The category of "obscene content", for instance, which is blocked even on the lowest setting of BT's opt-in filtering system, covers "sites with information about illegal manipulation of electronic devices [and] distribution of software" – in other words, filesharing and music downloads, debate over which has been going on in parliament for years. It looks as if that debate has just been bypassed entirely, by way of scare stories about five-year-olds and fisting videos. Whatever your opinion on downloading music and cartoons for free, doing so is neither obscene nor pornographic.

Cameron's porn filter looks less like an attempt to protect kids than a convenient way to block a lot of content the British government doesn't want its citizens to see, with no public consultation whatsoever.

The worst thing about the porn filter, though, is not that it accidentally blocks a lot of useful information but that it blocks information at all. With minimal argument, a Conservative-led government has given private firms permission to decide what websites we may and may not access. This sets a precedent for state censorship on an enormous scale – all outsourced to the private sector, of course, so that the coalition does not have to hold up its hands to direct responsibility for shutting down freedom of speech.

More worrying still is the inclusion of material relating to "extremism", however the state and its proxies are choosing to define that term. Bearing in mind that simple protest groups like tax justice organisation UK Uncut have been labelled extremist by some, there is every chance that the categories for what constitutes "inappropriate" online content will be conveniently broad – and there's always room to extend them. The public gets no say over what political content will now be blocked, just as we had no say over whether we wanted such content blocked at all.

Records of opt-in software will, furthermore, make it simpler for national and international surveillance programmes to track who is looking at what sort of website. Just because they can doesn't mean they will, of course, but seven months of revelations about the extent of data capturing by GCHQ and the NSA – including the collection of information on the porn habits of political actors in order to discredit them – does make for reasonable suspicion. Do you still feel comfortable about ticking that box that says you want to see "obscene and tasteless content"? Are you sure?

The question of who should be allowed to access what information has become a defining cultural debate of the age. Following the Edward Snowden revelations, that question will be asked of all of us in 2014, and we must understand attempts by any state to place blocks and filters on online content in that context.

Policies designed for controlling adults have long been implemented in the name of protecting children, but if we really want to give children their best chance, we can start by denying private companies and conservative politicians the power to determine the minutiae of what they may and may not know. Instant access to centuries of information and learning is a provision without peer in the history of human civilisation. For the sake of the generations to come, we must protect it.

How misaligning data can increase performance 12x by reducing cache misses - Dan Luu

$
0
0

Comments:"How misaligning data can increase performance 12x by reducing cache misses - Dan Luu"

URL:http://danluu.com/3c-conflict/


Here’s the graph of a toy benchmark1 of page-aligned vs. mis-aligned accesses; it shows the ratio of performance between the two at different working set sizes. If this benchmark seems contrived, it actually comes from a real-world example of the disastrous performance implications of using nice power-of-2 alignment, or page alignment, in an actual system2.

Except for very small working sets (1-8), the unaligned version is noticeably faster than the page-aligned version, and there’s a large region up to a working set size of 512 where the ratio in performance is somewhat stable, though more so on our Sandy Bridge chip than our Westmere chip.

To understand what’s going on here, we have to look at how caches organize data. By way of analogy, consider a 1,000 car parking garage that has 10,000 permits. With a direct mapped scheme (which you could call 1-way associative3), each of the ten permits that has the same 3 least significant digits would be assigned the same spot, i.e., permits 0618, 1618, 2618, and so on, are only allowed to park in spot 618. If you show up at your spot and someone else is in it, you kick them out and they have to drive back home. The next time they get called in to work, they have to drive all the way back to the parking garage.

Instead, if each car’s permit allows it to park in a set that has ten possible spaces, we’ll call that a 10-way set associative scheme, which gives us 100 sets of ten spots. Each set is now defined by the last 2 significant digits instead of the last 3. For example, with permit 2618, you can park in any spot from the set {018, 118, 218, …, 918}. If all of them are full, you kick out one unlucky occupant and take their spot, as before.

Let’s move out of analogy land and back to our benchmark. The main differences are that there isn’t just one garage-cache, but a hierarchy of them, from the L14, which is the smallest (and hence, fastest) to the L2 and L3. Each seat in a car corresponds to an address. On x86, each address points to a particular byte. In the Sandy Bridge chip we’re running on, we’ve got a 32kB L1 cache with a 64-byte line size, 64 sets, and 8-way set associativity. In our analogy, a line size of 64 would correspond to a car with 64 seats. We always transfer things in 64-byte chunks and the bottom log₂(64) = 6 bits of an address refer to a particular byte offset in a cache line. The next log₂(64) = 6 bits determine which set an address falls into5. Each of those sets can contain 8 different things, so we have 64 sets * 8 lines/set * 64 bytes/line = 32kB. If we use the cache optimally, we can store 32,768 items. But, since we’re accessing things that are page (4k) aligned, we effectively lose the bottom log₂(4k) = 12 bits, which means that every access falls into the same set, and we can only loop through 8 things before our working set is too large to fit in the L1! But if we’d misaligned our data to different cache lines, we’d be able to use 8 * 64 = 512 locations effectively.
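
To make the set arithmetic concrete, here is a small C sketch (my own illustration, not part of the benchmark itself; the constants are just the Sandy Bridge L1 parameters quoted above, and the base address is arbitrary). It computes which L1 set an address maps to: addresses spaced a page apart all collide in one set, while addresses spaced a page plus one cache line apart walk through the sets.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Sandy Bridge L1 parameters quoted above: 64-byte lines, 64 sets, 8 ways. */
#define LINE_BITS 6  /* log2(64): byte offset within a cache line */
#define SET_BITS  6  /* log2(64): which of the 64 L1 sets         */

/* Which L1 set does this address map to? */
static unsigned l1_set(uintptr_t addr) {
    return (unsigned)((addr >> LINE_BITS) & ((1u << SET_BITS) - 1));
}

int main(void) {
    uintptr_t base = 0x100000; /* arbitrary base address, for illustration only */

    for (int i = 0; i < 4; i++)  /* page-aligned strides: always the same set */
        printf("page stride      0x%" PRIxPTR " -> set %u\n",
               base + (uintptr_t)i * 4096, l1_set(base + (uintptr_t)i * 4096));

    for (int i = 0; i < 4; i++)  /* page + one line: sets now differ */
        printf("page+line stride 0x%" PRIxPTR " -> set %u\n",
               base + (uintptr_t)i * (4096 + 64),
               l1_set(base + (uintptr_t)i * (4096 + 64)));
    return 0;
}

With 4kB strides, bits 6-11 of every address are identical, so all accesses compete for the 8 ways of a single set; shifting each item by an extra 64 bytes spreads them across all 64 sets.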

Similarly, our chip has a 512 set L2 cache, of which 8 sets are useful for our page aligned accesses, and a 12288 set L3 cache, of which 192 sets are useful for page aligned accesses, giving us 8 sets * 8 lines / set = 64 and 192 sets * 8 lines / set = 1536 useful cache lines, respectively. For data that’s misaligned by a cache line, we have an extra 6 bits of useful address, which means that our L2 cache now has 32,768 useful locations.

In the Sandy Bridge graph above, there’s a region of stable relative performance between 64 and 512, as the page-aligned version is running out of the L3 cache and the unaligned version is running out of the L1. When we pass a working set of 512, the relative ratio gets better for the aligned version because it’s now an L2 access vs. an L3 access. Our graph for Westmere looks a bit different because its L3 is only 3072 sets, which means that the aligned version can only stay in the L3 up to a working set size of 384. After that, we can see the terrible performance we get from spilling into main memory, which explains why the two graphs differ in shape above 384.

For a visualization of this, you can think of a 32 bit pointer looking like this to our L1 and L2 caches:

TTTT TTTT TTTT TTTT TTTT SSSS SSXX XXXX

TTTT TTTT TTTT TTTT TSSS SSSS SSXX XXXX

The bottom 6 bits are ignored, the next bits determine which set we fall into, and the top bits are a tag that let us know what’s actually in that set. Note that page aligning things, i.e., setting the address to

???? ???? ???? ???? ???? 0000 0000 0000

was just done for convenience in our benchmark. Not only will aligning to any large power of 2 cause a problem; generating addresses with a power-of-2 offset from each other will cause the same problem.
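
For readers who want to see the effect first-hand, here is a minimal C sketch in the same spirit (my own illustration, not the benchmark1 used for the graphs above; the working-set size, pass count and timing method are arbitrary choices). It times repeated reads over 256 locations spaced either exactly one page apart or one page plus one cache line apart; on a machine with caches like the ones described above, the page-aligned variant should come out noticeably slower.

#define _POSIX_C_SOURCE 199309L   /* for clock_gettime when compiling with -std=c11 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define PAGE   4096
#define LINE   64
#define N      256        /* number of hot locations in the working set (arbitrary) */
#define PASSES 1000000L   /* passes over the working set (arbitrary)                */

/* Sum one int from each of N addresses spaced 'stride' bytes apart, and time it. */
static double run(const char *slab, size_t stride) {
    volatile int sink = 0;   /* volatile so the loads are not optimized away */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long p = 0; p < PASSES; p++)
        for (int i = 0; i < N; i++)
            sink += *(const int *)(slab + (size_t)i * stride);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    (void)sink;
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    /* One page-aligned slab, big enough for either stride; aligned_alloc wants
       a size that is a multiple of the alignment, so round up to a whole page. */
    size_t needed = (size_t)N * (PAGE + LINE) + sizeof(int);
    size_t size   = (needed + PAGE - 1) / PAGE * PAGE;
    char *slab = aligned_alloc(PAGE, size);
    if (!slab) return 1;
    memset(slab, 0, size);   /* pre-fault all pages so neither run pays that cost */

    double aligned = run(slab, PAGE);        /* every access page-aligned       */
    double offset  = run(slab, PAGE + LINE); /* each access shifted by one line */
    printf("page-aligned: %.3fs   line-offset: %.3fs   ratio: %.2fx\n",
           aligned, offset, aligned / offset);
    free(slab);
    return 0;
}

Compile with something like cc -O2 -std=c11; the absolute times will vary by machine, but the ratio between the two runs is the interesting number.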

Nowadays, the importance of caches is well understood enough that, when I’m asked to look at a cache-related performance bug, it’s usually due to the kind of thing we just talked about: conflict misses that prevent us from using our full cache effectively6. This isn’t the only way for that to happen — bank conflicts and false dependencies are also common problems, but I’ll leave those for another blog post.

Well, I’ve now managed to blog about two of the areas where I have the biggest comparative advantage. Three or four more blog posts and I’ll be able to write myself straight out of my job. I must be moving up in the world, because I was able to automate myself out of my first job with a couple shell scripts and a bit of Ruby. At least this requires some human intervention.


Coingen

$
0
0

Comments:"Coingen"

URL:http://coingen.io


Coin Name (one word, case is ignored)
Coin Abbreviation (exactly three letters, eg BTC)
Coin Icon (256x256)

Remove Coingen branding on splash screen (0.10 BTC)

Include source (+0.05 BTC)

Do not display my coin on the public status page (I understand that if I lose my private link, I will lose access to my coin).

BBC News - Intermittent fasting: The good things it did to my body

$
0
0

Comments:"BBC News - Intermittent fasting: The good things it did to my body"

URL:http://www.bbc.co.uk/news/magazine-25549805


2 January 2014. Last updated at 19:23 ET. By Peter Bowes, BBC News, Los Angeles

Many of the changes in my body when I took part in the clinical trial of an intermittent fasting diet were no surprise. Eating very little for five days each month, I lost weight, and I felt hungry. I also felt more alert a lot of the time, though I tired easily. But there were other effects too that were possibly more important.

During each five-day fasting cycle, when I ate about a quarter of the average person's diet, I lost between 2kg and 4kg (4.4-8.8lbs), but before the next cycle came round, 25 days of eating normally had returned me almost to my original weight.

But not all consequences of the diet faded so quickly.

"What we are seeing is the maintenance of some of the effects even when normal feeding resumes," explains Dr Valter Longo, director of USC's Longevity institute, who has observed similar results in rodents.

"That was very good news because that's exactly what we were hoping to achieve."

Clinical tests showed that during the diet cycles my systolic blood pressure dropped by about 10%, while the diastolic number remained about the same. For someone who has, at times, had borderline hypertension, this was encouraging. However, after the control period (normal diet), my blood pressure, like my weight, returned to its original - not-so-healthy - state.

The researchers will be looking at whether repeated cycles of the diet could be used to help manage blood pressure in people over the longer term.

Arguably, the most interesting changes were in the levels of a growth hormone known as IGF-1 (insulin-like growth factor). High levels of IGF-1, which is a protein produced by the liver, are believed significantly to increase the risks of colorectal, breast and prostate cancer. Low levels of IGF-1 reduce those risks.

"In animals studies we and others have shown this to be a growth factor that is very much associated with ageing and a variety of diseases, including cancer," says Longo.

Studies in mice have shown that an extreme diet, similar to the one I experienced, causes IGF-1 levels to drop and to stay down for a period after a return to normal eating.

My data showed exactly the same pattern.

"You had a dramatic drop in IGF-1, close to 60% and then once you re-fed it went up, but was still down 20%," Longo told me.

Such a reduction could make a significant difference to an individual's likelihood of developing certain cancers, he says. A study of a small population of people in Ecuador, who have much lower levels of IGF-1, because they lack a growth hormone receptor, showed that they rarely develop cancer and other age-related conditions.


Insulin-like growth factor 1

  • IGF-1 is a protein produced by the liver when it is stimulated by growth hormone circulating in the blood
  • It plays a role in the growth of muscle, bones and cartilage throughout the body, and is critical to growth and development during childhood
  • Lower levels of IGF-1, induced by calorie restriction, have been shown in rodents to slow the ageing process and protect against cancer
  • IGF-1 levels in adult humans vary according to age and gender

My blood tests also revealed that the major inhibitor of IGF-1, which is called IGFBP-1, was significantly up during the fasting period. Even when I resumed a normal diet, the IGFBP-1 level was elevated compared with my baseline. It is, according to Longo, a sign that my body switched into a mode that was much more conducive to healthy ageing.

Data from other participants in the study is still being analysed, but if they also show lower levels of IGF-1 and higher levels of IGFBP-1, it could help scientists develop an intermittent fasting regime that allows people to eat a normal diet for the vast majority of the time, and still slow down the ageing process.

One idea being explored by Longo is that a five-day intervention every 60 days may be enough to trigger positive changes in the body.

"This is exactly what we have in mind to allow people, for let's say 55 every 60 days, to decide what they are going to eat with the help of a good doctor, and diet in the five days. They may not think it is the greatest food they have ever eaten, but it's a lot easier, let's say, than complete fasting and it's a lot safer than complete fasting and it may be more effective than complete fasting."


My levels of IGF-1 during the trial

  • At start: 119ng/ml (nanograms per millilitre)
  • Immediately after five-day fasting diet: 49ng/ml
  • One week after five-day fasting diet: 97ng/ml
  • Normal range for men 51-60 years old is 68-245ng/ml

The very small meals I was given during the five-day fast were far from gourmet cooking, but I was glad to have something to eat. There are advocates of calorie restriction who promote complete fasting.

My blood tests also detected a significant rise in a type of cell, which may play a role in the regeneration of tissues and organs.

It is a controversial area and not fully understood by scientists.

"Your data corresponds to pre-clinical data that we got from animal models that shows that cycles of fasting could elevate this particular substance, considered to be stem cells," said Dr Min Wei, the lead investigator.

The substance has also been referred to, clumsily, as "embryonic-like".

"At least in humans we have a very limited understanding of what they do. In animal studies they are believed to be 'embryonic-like' meaning... they are the type of cells that have the ability to regenerate almost anything," says Longo.

It would be highly beneficial if intermittent fasting could trigger a response that enhances the body's ability to repair itself, but much more research is required to confirm these observations.

This diet is still at the experimental stage and data from the trial are still being studied. Other scientists will eventually scrutinise the findings independently, and may attempt to replicate them.

"We generally like to see not only an initial discovery in a trial but we like to see confirmatory trials to be sure that in the broadest kind of sense, in the general population that these findings are going to be applicable," says Dr Lawrence Piro, a cancer specialist at The Angeles Clinic and Research Institute.

"I do believe fasting to be a very effective mechanism. They are pieces of a puzzle, that puzzle is not fully revealed yet, the picture isn't clear yet but there's enough of the picture clear. I think we can be really excited that there is some substantial truth here, some substantial data coming forward and something that we can really be hopeful about."

Future clinical trials will focus on "at-risk" members of the community - those who are obese - to gauge their response to a severely restricted diet.

But if this diet, or another intermittent fasting diet, is eventually proven to be effective and sustainable, it could have profound implications for weight loss and the way doctors fight the diseases of old age.

This is the third part of a series. See also: Fasting for science and Sitting out the hunger pangs.


Teaching Software Architecture: with GitHub! | Arie van Deursen

$
0
0

Comments:"Teaching Software Architecture: with GitHub! | Arie van Deursen"

URL:http://avandeursen.com/2013/12/30/teaching-software-architecture-with-github/


Arie van Deursen, Alex Nederlof, and Eric Bouwers.

When teaching software architecture it is hard to strike the right balance between practice (learning how to work with real systems and painful trade offs) and theory (general solutions that any architect needs to thoroughly understand).

To address this, we decided to try something radically different at Delft University of Technology:

  • To make the course as realistic as possible, we based the entire course on GitHub. Thus, teams of 3-4 students had to adopt an active open source GitHub project and contribute. And, in the style of “How GitHub uses GitHub to Build GitHub“, all communication within the course, among team members, and with external stakeholders took place through GitHub where possible.
  • Furthermore, to ensure that students actually digested architectural theory, we made them apply the theory to the actual project they picked. As sources we used literature available on the web, as well as the software architecture textbook written by Nick Rozanski and Eoin Woods.

To see if this worked out, here’s what we did (the course contents), how we did it (the course structure, including our use of GitHub), and what most of us liked, or did not like so much (our evaluation).

Course Contents

GitHub Project Selection

At the start of the course, we let teams of 3-4 students adopt a project of choice on GitHub. We placed a few constraints, such as that the project should be sufficiently complex, and that it should provide software that is actually used by others — there should be something at stake.

Based on our own analysis of GitHub, we also provided a preselection of around 50 projects, which contained at least 200 pull requests (active involvement from a community) and an automated test suite (state of the art development practices).

The projects selected by the students included Netty, Jenkins, Neo4j, Diaspora, HornetQ, and jQuery Mobile, to name a few.

Making Contributions

To get the students involved in their projects, we encouraged them to make actual contributions. Several teams were successful in this, and offered pull requests that were actually merged. Examples include an improvement of the Aurelius Titan build process, or a fix for an open issue in HornetQ. Other contributions were in the form of documentation, for example one team created a (popular) screencast on how to build a chat client/server system with Netty in under 15 minutes, posted on Youtube.

The open source projects generally welcomed these contributions, as illustrated by the HornetQ reaction:

Thanks for the contribution… I can find you more if you need to.. thanks a lot! :+1:

And concerning the Netty tutorial:

Great screencast, fantastic project.

Students do not usually get this type of feedback, and were proud to have made an impact. Or, as the CakePHP team put it:

All in all we are very happy with the reactions we got, and we really feel our contribution mattered and was appreciated by the CakePHP community.

To make it easier to establish contact with such projects, we set up a dedicated @delftswa Twitter account. Many of the projects our students analyzed have their own Twitter accounts, and just mentioning a handle like @jenkinsci or @cakephp helped attract the project’s attention and establish the first contact.

Identifying Stakeholders

One of the first assignments was to conduct a stakeholder analysis:
understanding who had an interest in the project, what their interest was, and which possibly conflicting needs existed.

To do this, the students followed the approach to identify and engage stakeholders from Rozanski and Woods. They distinguish various stakeholder classes, and recommend looking for stakeholders who are most affected by architectural decisions, such as those who have to use the system, operate or manage it, develop or test it, or pay for it.

Aurelius Stakeholders (click to enlarge)

To find the stakeholders and their architectural concerns, the student teams analyzed any information they could find on the web about their project. Besides documentation and mailing lists, this also included an analysis of recent issues and pull requests as posted on GitHub, in order to see what the most pressing concerns of the project at the moment are and which stakeholders played a role in these discussions.

Architectural Views

Since Kruchten’s classical “4+1 views” paper, it has been commonly accepted that there is no such thing as the architecture. Instead, a single system will have multiple architectural views.

Following the terminology from Rozanski and Woods, stakeholders have different concerns, and these concerns are addressed through views.
Therefore, with the stakeholder analysis in place, we instructed students to create relevant views, using the viewpoint collection from Rozanski and Woods as starting point.

Rozanski & Woods Viewpoints

Some views the students worked on include:

  • A context view, describing the relationships, dependencies, and interactions between the system and its environment. For example, for Diaspora, the students highlighted the relations with users, ‘podmins’, other social networks, and the user database.
  • A development view, describing the system for those involved in building, testing, maintaining, and enhancing the system. For example, for neo4j, students explained the packaging structure, the build process, test suite design, and so on.
  • A deployment view, describing the environment into which the system will be deployed, including the dependencies the system has on its runtime environment. For GitLab, for example, the students described the installations on the clients as well as on the server.

Besides these views, students could cover alternative ones taken from other software architecture literature, or they could provide an analysis of the perspectives from Rozanski and Woods, to cover such aspects as security, performance, or maintainability.

Again, students were encouraged not just to produce architectural documentation, but to challenge the usefulness of their ideas by sharing them with the projects they analyzed. This also forced them to focus on what was needed by these projects.

For example, the jQuery Mobile team was working on a new version aimed at realizing substantial performance improvements. Therefore, our students conducted a series of performance measurements comparing two versions as part of their assignment. Likewise, for CakePHP, our students found the event system of most interest, and contributed a well-received description of the new CakePHP Events System.

Software Metrics

For an architect, metrics are an important tool to discuss (desired) quality attributes (performance, scalability, maintainability, …) of a system quantitatively. To equip students with the necessary skills to use metrics adequately, we included two lectures on software metrics (based on our ICSE 2013 tutorial).

Students were asked to think of what measurements could be relevant for their projects. To that end, they first of all had to think of an architecturally relevant goal, in line with the Goal/Question/Metric paradigm. Goals picked by the students included analyzing maintainability, velocity in responding to bug reports, and improving application responsiveness.

Subsequently, students had to turn their project goal into 3 questions, and identify 2 metrics per question.

Brackets Analyzability

Many teams not only came up with metrics, but also used some scripting to actually conduct measurements. For example, the Brackets team decided to build RequireJS-Analyzer, a tool that crawls JavaScript code connected using RequireJS and analyzes the code, to report on metrics and quality.

Architectural Sketches

Being an architect is not just about views and metrics — it is also about communication. Through some luck, this year we were able to include an important yet often overlooked take on communication: The use of design sketches in software development teams.

We were able to attract Andre van der Hoek (UC Irvine) as guest speaker. To see his work on design sketches in action, have a look at the Calico tool from his group, in which software engineers can easily share design ideas they write at a sketch board.

In one of the first lectures, Andre presented Calico to the students. One of the early assignments was to draw any free format sketches that helped to understand the system under study.

The results were very creative. The students freed themselves from the obligation to draw UML-like diagrams, and used colored pencils and paper to draw whatever they felt like.

For example, the above picture illustrates the HornetQ API; below you see the Jenkins workflow, including dollars flowing from customers to the contributing developers.

Lectures From Industry

Last but not least, we wanted students to meet and learn from real software architects. Thus, we invited presentations from the trenches — architects working in industry, sharing their experience.

For example, in 2013 we had TU Delft alumnus Maikel Lobbezoo, who presented the architectural challenges involved in building the payment platform offered by Adyen. The Adyen platform is used to process gazillions of payments per day, across the world. Adyen has been one of the fastest growing tech companies in Europe, and its success can be directly attributed to its software architecture.

Maikel explained the importance of having a stateless architecture that is linearly scalable, redundant, and based on push notifications and idempotency. His key message was that the successful architect possesses a (rare) combination of development skills, communication skills, and strategic business insight.

Our other guest lecture covered cyber-physical systems: addressing the key role software plays in safety-critical infrastructure systems such as road tunnels — a lecture provided by Eric Burgers from Soltegro. Here the core message revolved around traceability to regulations, the use of model-based techniques such as SysML, and the need to communicate with stakeholders with little software engineering experience.

Course Structure

Constraints

Some practical constraints of the course included:

  • The course was worth 5 credit points (ECTS), corresponding to 5 * 27 = 135 hours of work per student.
  • The course ran for half a semester, which is 10 consecutive weeks
  • Each week, there was time for two lectures of 90 minutes each
  • We had a total of over 50 students, eventually resulting in 14 groups of 3-4 students.
  • The course was intended for 4th-year students (median age 22), who were typically following the TU Delft master in Computer Science, Information Architecture, or Embedded Systems.

GitHub-in-the-Course

All communication within the course took place through GitHub, in the spirit of the inspirational video How GitHub uses GitHub to Build GitHub. As a side effect, this eliminated the need to use the (not so popular) Blackboard system for intra-course communication.

From GitHub, we obtained a (free) delftswa organization. We used it to host various repositories, for example for the actual assignments, for the work delivered by each of the teams, as well as for the reading material.

Each student team obtained one repository for themselves, which they could use to collaboratively work on their assignments. Since each group worked on a different system, we decided to make these repositories visible to the full group: In this way, students could see the work of all groups, and learn from their peers how to tackle a stakeholder analysis or how to assess security issues in a concrete system.

Having the assignment itself on a GitHub repository not only helped us to iteratively release and improve the assignment — it also gave students the possibility to propose changes to the assignment, simply by posting a pull request. Students actually did this, for example to fix typos or broken links, to pose questions, to provide answers, or to start a discussion to change the proposed submission date.

We considered whether we could make all student results public. However, we first of all wanted to provide students with a safe environment, in which failure could be part of the learning process. Hence it was up to the students to decide which parts of their work (if any) they wanted to share with the rest of the world (through blogs, tweets, or open GitHub repositories).

Time Keeping

One of our guiding principles in this course was that an architect is always eager to learn more. Thus, we expected all students to spend the full 135 hours that the course was supposed to take, whatever their starting point in terms of knowledge and experience.

This implies that it is not just the end result that primarily counts, but the progress the students have made (although there is of course, a “minimum” result that should be met at least).

To obtain insight in the learning process, we asked students to keep track of the hours they spent, and to maintain a journal of what they did each week.

For reasons of accountability such time keeping is a good habit for an architect to adopt. This time keeping was not popular with students, however.

Student Presentations

We spent only part of the lecturing time doing class room teaching.
Since communication skills are essential for the successful architect,
we explicitly allocated time for presentations by the students:

  • Half-way through the course, all teams presented a 3-minute pitch about their plans and challenges.
  • At the end of the course we organized a series of 15 minute presentations in which students presented their end results.

Each presentation was followed by a Q&A round, with questions coming from the students as well as an expert panel. The feedback concerned the architectural decisions themselves, as well as the presentation style.

Grading

Grading was done per team, based on the following items:

  • Final report: the main deliverable of each team, providing the relevant architectural documentation created by the team.
  • Series of intermediate (‘weekly’) reports corresponding to dedicated assignments on, e.g., metrics, particular views, or design sketches. These assignments took place in the first half of the course, and formed input for the final report;
  • Team presentations

Furthermore, there was one individual assignment, in which each student had to write a review report for the work of one of the other teams. Students did a remarkably good job at this, being critical as well as constructive. Besides allowing for individual grading, this also ensured each team received valuable feedback from 3-4 fellow students.

Evaluation

All in all, teaching this course was a wonderful experience. We gave the students considerable freedom, which they used to come up with remarkable results.

A key success factor was peer learning. Groups could learn from each other, and gave feedback to each other. Thus, students not only made an in-depth study of the single system they were working on, but also learned about 13 different architectures as presented by the other groups. Likewise, they not only learned about the architectural views and perspectives they needed themselves, but also learned from their co-students how they used different views in different ways.

The use of GitHub clearly contributed to this peer learning. Any group could see, study, and learn from the work of any other group. Furthermore, anyone, teachers and students alike, could post issues, give feedback, and propose changes through GitHub’s facilities.

On the negative side, while most students were familiar with git, not all were. The course could be followed using just the bare minimum of git knowledge (add, commit, checkout, push, pull), yet the underlying git magic sometimes resulted in frustration with the students if pull requests could not be merged due to conflicts.

An aspect the students liked least was the time keeping. We are searching for alternative ways of ensuring that time is spent early on in the project, and ways in which we can assess the knowledge gain instead of the mere result.

One of the parameters of a course like this is the actual theory that is discussed. This year, we included views, metrics, sketches, as well as software product lines and service oriented architectures. Topics that were less explicit this year were architectural patterns, or specific perspectives such as security or concurrency. Some of the students indicated that they liked the mix, but that they would have preferred a little more theory.

Conclusion

In this course, we aimed at getting our students ready to take up a leading role in software development projects. To that end, the course put a strong focus on:

  • Close involvement with open source projects where something was at stake, i.e., which were actually used and under active development;
  • The use of peer learning, leveraging GitHub’s social coding facilities for delivering and discussing results
  • A sound theoretical basis, acknowledging that architectural concepts can only be grasped and deeply understood when trying to put them into practice.

Next year, we will certainly follow a similar approach, naturally with some modifications. If you have suggestions for a course like this, or are aware of similar courses, please let us know!

Acknowledgments

A big thank you to all students who participated in IN4315 in 2013,
to jury members Nicolas Dintzner, Felienne Hermans, and Georgios Gousios, and guest lecturers Andre van der Hoek, Eric Burgers, Daniele Romano, and Maikel Lobbezoo for shaping this course!

Further Reading

  • Zach Holman. How GitHub uses GitHub to Build GitHub. RubyConf, September 2011.
  • Nick Rozanski and Eoin Woods. Software Systems Architecture: Working with Stakeholders Using Viewpoints and Perspectives. Addison-Wesley, 2nd edition, 2012.
  • Diomides Spinellis and Georgios Gousios (editors). Beautiful Architecture: Leading Software Engineers Explain How They Think. O’Reilly Media, 2009 (link).
  • Amy Brown and Greg Wilson (editors). The Architecture of Open Source Applications. Volumes 1-2, 2012.
  • Software Architecture Wikipedia page (thoughtfully and substantially edited by the IFIP Working Group on Software Architecture in 2012).
  • Eric Bouwers. Metric-based Evaluation of Implemented Software Architectures. PhD Thesis, Delft University of Technology, 2013.
  • Remco de Boer, Rik Farenhorst, Hans van Vliet. A Community of Learners Approach to Software Architecture Education. CSEE&T, 2009.

© Text: Arie van Deursen, Alex Nederlof, and Eric Bouwers, 2013.

© Architectural views: students of the TU Delft IN4315 course, 2013.


Twitter SVP Chris Fry Breaks Down How His Engineering Org Works | Re/code

$
0
0

Comments:"Twitter SVP Chris Fry Breaks Down How His Engineering Org Works | Re/code"

URL:http://recode.net/2014/01/02/twitter-svp-chris-fry-breaks-down-how-his-engineering-org-works/


You could say Chris Fry has a pretty big job.

As Twitter’s senior vice president of engineering, he’s the guy who makes sure the trains — or rather, tweets — run on time. He’s responsible for managing the roughly 1000-plus engineers who make up half the company working on Twitter’s site reliability, new products and many other projects.

Now, as Twitter enters a new era of fewer fail whales and more time spent under Wall Street’s watchful eye, Fry has an even bigger responsibility: Attracting new engineers, holding on to the old ones, and keeping an ever-growing house in order. And as anyone who has worked in a rapidly scaling company will tell you, that’s no easy feat.

We chatted with Fry about Twitter’s engineering and what’s ahead for the org come 2014 (hint: It involves more product innovation). Read the lightly edited excerpts from our discussion below.

Re/code: So what has changed in Twitter’s engineering organization since you’ve been there?

Chris Fry: The interesting thing about Twitter is that it was set up originally around these allotted, independent teams working in isolation. And there’s always a tension between distribution and centralization.

What we had at Salesforce (Fry’s home for seven years before he joined Twitter two years ago) was probably a little too centralized, while what we had at Twitter was a little too distributed.

So the goal was to maintain a lot of the autonomy of the individual teams, but to put a structure in place that allowed everyone to work together seamlessly.

From an organizational standpoint, I think about it like you’re building a school. Because half the people that are coming in don’t know anything about what has to happen, so you’re trying to get them up to speed as fast as possible. I always bring this learning aspect to orgs, not only in their day-to-day jobs, but in getting people up to speed.

How typical is that compared to other tech companies? 

The thing that made Twitter unique was the history of the company and the rapid scale. Very few companies experience what Twitter did. Just keeping the site up was the full-time job of most people in the organization. And when I came in, there was still a lot of work to do.

But from my experience, there’s a lot of similarity in terms of the way software organizations work these days. Everyone [in Silicon Valley] uses a sort of lightweight, agile framework for teams where they’re fairly autonomous.

There was a lot of time in the early 2000s where people were shifting from what would be called a classic waterfall model – design, build, test — to a more lightweight model. And I think most Valley companies take that lean model and apply it.

The trick is figuring out as you scale from one team to 100, how do you make it so that the teams can still have a little bit of structure so that they know what they need to work on, but are still able to rapidly experiment and iterate? That’s a lot of what I work on — to keep giving teams autonomy.

So does that mean bringing in more managers?

Focusing on leaders and managers is key. Some organizations think that you can do away with managers entirely. But I think it’s better to focus on how to make managers super effective. And for new organizations, you often have a lot of people who are promoted out of technical positions into leadership positions — so focusing on that transition, and how you lead sets of teams.

One of the things that definitely attracted me to Twitter was Dick Costolo’s “Leading at Twitter” class. He still teaches a class for every manager entering the organization on his personal philosophy on leadership.

So talk to me about mobility inside of Twitter. If I’m a person working on Web products, can I switch over to Android at some point? I imagine I might get bored doing the same thing at Twitter after a while.

One of the things I always think about is how to deliver three things to everyone that works for me. One is autonomy, one is mastery and one is purpose.

So at Twitter, the purpose is that you’re building this communication framework that allows anyone in the world to communicate with everyone in the world. Twitter University (launched in 2013) [addresses] the mastery piece: Getting better as a craftsman in the work.

Autonomy is interesting. You want it at the team and individual levels. On the individual level, every quarter, employees can basically go out and, if they find a role inside engineering where the team would like them to join, we’ll let them make that move.

I think of it as creating a free market for talent inside the company. Because if you think of the sort of free-market environment for talent in the Valley that we’re in right now, everybody is recruiting engineers [constantly]. They’ll probably get five emails from other companies every day. So you want to give people inside the company the same advantage by reaching out to people and giving them new opportunities.

Okay, so you’ve ballooned in size over the past two years. Now how do you keep them around for the long haul?

First, it’s about the work. There’s only so much you can do at a company if the work isn’t great. So make sure everyone is working on something that’s important and that they feel passionate about.

The beauty of Twitter is that you really are working on something that fills a unique need in society — this service that connects people.

But what about something like upward mobility? That’s pretty important, too. 

You know, I think one of the more interesting things we do is our promotions. It’s similar to what we did at Salesforce and I think Google does it too.

We do a lot based on peer feedback, and then promotions are evaluated by a team of engineers. So it isn’t like managers pick people and say “these people are promoted.” It’s really a peer-based nomination system rather than management-controlled, and that, I think, has the ability to drive a fair, more transparent system.

And that’s different from, say, what Yahoo or Microsoft were doing with Stack Ranking?

Yes, it’s very different [from] that.

Fair enough. So what’s next for Twitter in 2014?

If you look at the history of Twitter, the first thing to get under control was reliability. Then we needed to make it efficient. That foundation basically sets us up for a period of product innovation.

So if you think of what’s coming next for Twitter, it’s going to be a whole bunch of innovation in how people experience the information that sits at the heart of Twitter.

Look at two projects that came out recently. @EventParrot takes the whole signal of Twitter, figures out what you might need to know about what’s happening in the world and tells you about it, independent of who you’re following. Then there’s @MagicRecs, which offers a very tailored experience of recommendations [like accounts to follow and tweets to pay attention to] based on what’s happening in your network.

This whole setup has made it so that we can really start innovating around both what we show you and when we show it to you, and then making sure that you have a great day-to-day Twitter experience.

Interesting. So those two accounts, and a lot of other things y’all have done lately, have been positioned as “experiments.” Tell me why.

I think if you want to build a learning culture, you have to experiment. We always want teams trying out new ideas, so we really build a lot of infrastructure to allow people to test new ideas. And that’s a huge part of the way Twitter builds new product.

Is that a new approach?

It’s not entirely new, but we have been really focusing on making sure our experimentation is solid, and that we use it to inform everything we’re doing in the product.


129 Cars | This American Life
