Channel: Hacker News 50

Edge.org


Comments:"Edge.org"

URL:http://www.edge.org/response-detail/25401


The notion of standard deviation has confused hordes of scientists; it is time to retire it from common use and replace it with the more effective one of mean deviation. Standard deviation, STD, should be left to mathematicians, physicists and mathematical statisticians deriving limit theorems. There is no scientific reason to use it in statistical investigations in the age of the computer, as it does more harm than good—particularly with the growing class of people in social science mechanistically applying statistical tools to scientific problems.

Say someone just asked you to measure the "average daily variations" for the temperature of your town (or for the stock price of a company, or the blood pressure of your uncle) over the past five days. The five changes are: (-23, 7, -3, 20, -1). How do you do it?

Do you take every observation: square it, average the total, then take the square root? Or do you remove the sign and calculate the average? For there are serious differences between the two methods. The first produces 15.7 (using the usual sample convention of dividing the squared total by n − 1 before taking the root), the second 10.8. The first is technically called the root mean square deviation. The second is the mean absolute deviation, MAD. It corresponds to "real life" much better than the first—and to reality. In fact, whenever people make decisions after being supplied with the standard deviation number, they act as if it were the expected mean deviation.
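To make the two recipes concrete, here is a minimal Python sketch (my own illustration, not part of the essay) computing both measures for the five changes; the 15.7 quoted above corresponds to dividing the squared total by n − 1 rather than n:

changes = [-23, 7, -3, 20, -1]
n = len(changes)

mad = sum(abs(x) for x in changes) / n                       # mean absolute deviation: 10.8
rms = (sum(x * x for x in changes) / n) ** 0.5               # root mean square deviation: ~14.1
rms_sample = (sum(x * x for x in changes) / (n - 1)) ** 0.5  # sample convention (n - 1): ~15.7

print(mad, rms, rms_sample)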

It is all due to a historical accident: in 1893, the great Karl Pearson introduced the term "standard deviation" for what had been known as "root mean square error". The confusion started then: people thought it meant mean deviation. The idea stuck: every time a newspaper has attempted to clarify the concept of market "volatility", it defined it verbally as mean deviation yet produced the numerical measure of the (higher) standard deviation.

But it is not just journalists who fall for the mistake: I recall seeing official documents from the Department of Commerce and the Federal Reserve partaking of the conflation, even regulators in statements on market volatility. What is worse, Goldstein and I found that a high number of data scientists (many with PhDs) also get confused in real life.

It all comes from bad terminology for something non-intuitive. By a psychological bias Danny Kahneman calls attribute substitution, some people mistake MAD for STD because MAD comes to mind more easily. There are several reasons to prefer MAD in applied work:

1) MAD is more accurate in sample measurements, and less volatile than STD, since it is a natural weight whereas standard deviation uses the observation itself as its own weight, imparting large weights to large observations, thus overweighting tail events.

2) We often use STD in equations but really end up reconverting it within the process into MAD (say, in finance, for option pricing). In the Gaussian world, STD is about 1.25 times MAD, that is, the square root of π/2. (A quick simulation illustrating this ratio appears after this list.) But we adjust with stochastic volatility, where STD is often as high as 1.6 times MAD.

3) Many statistical phenomena and processes have "infinite variance" (as with the popular Pareto 80/20 rule) but have finite, and very well behaved, mean deviations. Whenever the mean exists, MAD exists. The reverse (infinite MAD and finite STD) is never true.

4) Many economists have dismissed "infinite variance" models thinking these meant "infinite mean deviation". Sad, but true. When the great Benoit Mandelbrot proposed his infinite variance models fifty years ago, economists freaked out because of the conflation.
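As a quick check of the ratio in point 2, here is a small Python simulation (my own sketch, not from the essay): for Gaussian data, the root mean square deviation settles at about 1.2533 times the mean absolute deviation, i.e. the square root of π/2.

import math
import random
import statistics

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(1_000_000)]   # zero-mean Gaussian sample

mad = statistics.fmean(abs(x) for x in xs)                 # mean absolute deviation
std = math.sqrt(statistics.fmean(x * x for x in xs))       # root mean square deviation

print(std / mad)                 # ~1.2533
print(math.sqrt(math.pi / 2))    # 1.2533...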

It is sad that such a minor point can lead to so much confusion: our scientific tools are way too far ahead of our casual intuitions, which is starting to be a problem for science. So I close with a statement by Sir Ronald A. Fisher: 'The statistician cannot evade the responsibility for understanding the process he applies or recommends.'

And the probability-related problems with social and biological science do not stop there: it has bigger problems with researchers using statistical notions out of a can without understanding them and babbling "n of 1" or "n large", or "this is anecdotal" (for a large Black Swan style deviation), mistaking anecdotes for information and information for anecdote. It was shown that the majority use regression in their papers in "prestigious" journals without quite knowing what it means, and what claims can—and cannot—be made from it. Because of little check from reality and lack of skin-in-the-game, coupled with a fake layer of sophistication, social scientists can make elementary mistakes with probability yet continue to thrive professionally.


FarmLogs Raises $4M Series A To Further Advance Farming Into The Age of Apps | TechCrunch


Comments:"FarmLogs Raises $4M Series A To Further Advance Farming Into The Age of Apps | TechCrunch"

URL:http://techcrunch.com/2014/01/15/farmlogs-raises-4m-series-a-to-bring-farming-into-the-age-of-apps/


Just a year after securing $1 million in seed funding, Michigan-based FarmLogs is announcing a $4 million Series A led by Drive Capital. The company says it is looking forward to a big 2014 and the co-founder and CEO tells me the company will use the influx of cash to execute on an aggressive product growth plan for the upcoming year.

Jesse Vollmar, CEO and co-founder of FarmLogs, explained to me that the company is building out its product to intelligently predict and optimize crop rotations as well as automate activity data collection. FarmLogs is also looking to ingest data collected by modern farming equipment that he tells me traditionally is rarely exported. By using low-cost Bluetooth hardware, the company expects to be able to analyse and upload this data in real time.

The Y Combinator alum touts the fact that 5% of farms in the U.S. are currently using its software. It’s an impressive stat considering the startup just graduated from YC in early 2012.

Drive Capital led the funding, with existing investors Huron River Ventures, Hyde Park Venture Partners and Hyde Park Angels also participating in the investment.

“We are very excited about the trajectory we are on and having additional support and resources will continue to accelerate our growth,” said Jesse Vollmar in a released statement. “We’ve helped thousands of farmers around the world take advantage of technology, and with their feedback and suggestions they’ve helped us create a smarter future for farming.”

FarmLogs’ data-driven approach to farming leans on the mobile web to help crop farmers quickly forecast profits, track expenses and more efficiently schedule operations. Better yet, the software uses GPS to pull in historical weather data for any given location. Farms can quickly jot down notes and input data using the software’s mobile app. It’s a radical revolution for the age-old industry.

FarmLogs launched to an industry ripe for modernization. The incumbent software requires traditional Wintel computers; FarmLogs requires just an iPad, which (although, admittedly, the extent of my “farming” consists of rebuilding a tiller a few years back) seems like a much friendlier device to have in a tractor than a laptop.

Volatility Labs: TrueCrypt Master Key Extraction And Volume Identification


Comments:"Volatility Labs: TrueCrypt Master Key Extraction And Volume Identification"

URL:http://volatility-labs.blogspot.com/2014/01/truecrypt-master-key-extraction-and.html


One of the disclosed pitfalls of TrueCrypt disk encryption is that the master keys must remain in RAM in order to provide fully transparent encryption. In other words, if master keys were allowed to be flushed to disk, the design would suffer in terms of security (writing plain-text keys to more permanent storage) and performance. This is a risk that suspects have to live with, and one that law enforcement and government investigators can capitalize on.

The default encryption scheme is AES in XTS mode. In XTS mode, primary and secondary 256-bit keys are concatenated together to form one 512-bit (64 bytes) master key. An advantage you gain right off the bat is that AES keys can be distinguished from other seemingly random blocks of data by the patterns of their key schedules. This is how tools like aeskeyfind and bulk_extractor locate the keys in memory dumps, packet captures, etc. In most cases, extracting the keys from RAM is as easy as this:

$ ./aeskeyfind Win8SP0x86.raw
f12bffe602366806d453b3b290f89429
e6f5e6511496b3db550cc4a00a4bdb1b
4d81111573a789169fce790f4f13a7bd
a2cde593dd1023d89851049b8474b9a0
269493cfc103ee4ac7cb4dea937abb9b
4d81111573a789169fce790f4f13a7bd
4d81111573a789169fce790f4f13a7bd
269493cfc103ee4ac7cb4dea937abb9b
4d81111573a789169fce790f4f13a7bd
0f2eb916e673c76b359a932ef2b81a4b
7a9df9a5589f1d85fb2dfc62471764ef47d00f35890f1884d87c3a10d9eb5bf4
e786793c9da3574f63965803a909b8ef40b140b43be062850d5bb95d75273e41
Keyfind progress: 100%

Several keys were identified, but only the final two are 256 bits (the others are 128-bit keys). Thus, you can bet that by combining the two 256-bit keys, you'll have your 512-bit master AES key. That's all pretty straightforward and has been documented in quite a few places, one of my favorites being Michael Weissbacher's blog.
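As a concrete (and purely illustrative) version of that last step, the two 256-bit keys from the aeskeyfind output above can simply be concatenated into the 512-bit XTS master key; which half is the primary (data) key and which is the secondary (tweak) key may need to be tried both ways:

key1 = bytes.fromhex("7a9df9a5589f1d85fb2dfc62471764ef47d00f35890f1884d87c3a10d9eb5bf4")
key2 = bytes.fromhex("e786793c9da3574f63965803a909b8ef40b140b43be062850d5bb95d75273e41")

master_key = key1 + key2          # 64 bytes = 512 bits
assert len(master_key) == 64

with open("master.key", "wb") as f:   # write the raw 64-byte key for later use
    f.write(master_key)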
The problem is: what if suspects change the default AES encryption scheme? TrueCrypt also supports Twofish, Serpent, and combinations thereof (AES-Twofish, AES-Twofish-Serpent). Furthermore, it supports modes other than XTS, such as LRW, CBC, outer CBC, and inner CBC (though many of the CBC variants are either deprecated or not recommended).

What do you do if a suspect uses non-default encryption schemes or modes? You can't find Twofish or Serpent keys with tools designed to scan for AES keys -- that just doesn't work. As pointed out by one of our Twitter followers (@brnocrist), a tool by Carsten Maartmann-Moe named Interrogate could be of use here (as could several commercial implementations from Elcomsoft or Passware). 


Another challenge that investigators face, in the case of file-based containers, is figuring out which file on the suspect's hard disk serves as the container. If you don't know that, then having the master keys is only as useful as finding the key to a house but having no idea where the house is. 

To address these issues, I wrote several new Volatility plugins. The truecryptsummary plugin gives you a detailed description of all TrueCrypt-related artifacts in a given memory dump. Here's how it appears on a test system running 64-bit Windows 2012:

$ python vol.py -f WIN-QBTA4959AO9.raw --profile=Win2012SP0x64 truecryptsummary
Volatility Foundation Volatility Framework 2.3.1 (T)
Process              TrueCrypt.exe at 0xfffffa801af43980 pid 2096
Kernel Module        truecrypt.sys at 0xfffff88009200000 - 0xfffff88009241000
Symbolic Link        Volume{52b24c47-eb79-11e2-93eb-000c29e29398} -> \Device\TrueCryptVolumeZ mounted 2013-10-11 03:51:08 UTC+0000
Symbolic Link        Volume{52b24c50-eb79-11e2-93eb-000c29e29398} -> \Device\TrueCryptVolumeR mounted 2013-10-11 03:55:13 UTC+0000
File Object          \Device\TrueCryptVolumeR\$Directory at 0x7c2f7070
File Object          \Device\TrueCryptVolumeR\$LogFile at 0x7c39d750
File Object          \Device\TrueCryptVolumeR\$MftMirr at 0x7c67cd40
File Object          \Device\TrueCryptVolumeR\$Mft at 0x7cf05230
File Object          \Device\TrueCryptVolumeR\$Directory at 0x7cf50330
File Object          \Device\TrueCryptVolumeR\$BitMap at 0x7cfa7a00
File Object          \Device\TrueCryptVolumeR\Chats\Logs\bertha.xml at 0x7cdf4a00
Driver               \Driver\truecrypt at 0x7c9c0530 range 0xfffff88009200000 - 0xfffff88009241000
Device               TrueCryptVolumeR at 0xfffffa801b4be080 type FILE_DEVICE_DISK
Container            Path: \Device\Harddisk1\Partition1
Device               TrueCrypt at 0xfffffa801ae3f500 type FILE_DEVICE_UNKNOWN

Among other things, you can see that the TrueCrypt volume was mounted on the suspect system on October 11th 2013. Furthermore, the path to the container is \Device\Harddisk1\Partition1, because in this case, the container was an entire partition (a USB thumb drive). If we were dealing with a file-based container as previously mentioned, the output would show the full path on disk to the file.

Perhaps even more exciting than all that is the fact that, despite the partition being fully encrypted, once it's mounted, any files accessed on the volume are cached by the Windows Cache Manager as normal -- which means the dumpfiles plugin can help you recover them in plain text. Yes, this includes the $Mft, $MftMirr, $Directory, and other NTFS metadata files, which are decrypted immediately when the volume is mounted. In fact, even if values that lead us to the master keys are swapped to disk, or if TrueCrypt (or other disk encryption suites like PGP or BitLocker) begin using algorithms without predictable/detectable keys, you can still recover all or part of any files accessed while the volume was mounted, based on the fact that the Windows OS itself will cache the file contents (remember, the encryption is transparent to the OS, so it caches files from encrypted volumes in the same way it always does).

After running a plugin such as truecryptsummary, you should have no doubts as to whether TrueCrypt was installed and in use, and which files or partitions are your targets. You can then run the truecryptmaster plugin which performs nothing short of magic. 

$ python vol.py -f WIN-QBTA4.raw --profile=Win2012SP0x64 truecryptmaster -D .
Volatility Foundation Volatility Framework 2.3.1 (T)
Container: \Device\Harddisk1\Partition1
Hidden Volume: No
Read Only: No
Disk Length: 7743733760 (bytes)
Host Length: 7743995904 (bytes)
Encryption Algorithm: SERPENT
Mode: XTS
Master Key
0xfffffa8018eb71a8 bbe1dc7a8e87e9f1f7eef37e6bb30a25   ...z.......~k..%
0xfffffa8018eb71b8 90b8948fefee425e5105054e3258b1a7   ......B^Q..N2X..
0xfffffa8018eb71c8 a76c5e96d67892335008a8c60d09fb69   .l^..x.3P......i
0xfffffa8018eb71d8 efb0b5fc759d44ec8c057fbc94ec3cc9   ....u.D.......<.
Dumped 64 bytes to ./0xfffffa8018eb71a8_master.key

You now have a 512-bit Serpent master key, which you can use to decrypt the roughly 8 GB USB drive. The output also tells you the encryption mode that the suspect used, the full path to the file or container, and some additional properties, such as whether the volume is read-only or hidden. As you may suspect, the plugin works regardless of the encryption algorithm, mode, key length, and various other factors which may complicate the procedure of finding keys. This is because it doesn't rely on key or key-schedule patterns -- it finds the keys in the exact same way the TrueCrypt driver itself finds them in RAM before it needs to encrypt or decrypt a block of data.

The truecryptsummary plugin supports all versions of TrueCrypt since 3.1a (released 2005) and truecryptmaster supports 6.3a (2009) and later. In one of the more exciting hands-on labs in our memory forensics training class, students experiment with these plugins and learn how to make suspects wish there was no such thing as Volatility. 

UPDATE 1/15/2014: In our opinion, what's described here is not a vulnerability in TrueCrypt (that was the reason we linked to their FAQ in the first sentence). We don't intend to cause mass paranoia or discourage readers from using the TrueCrypt software. Our best advice to people seeking to keep data secure and private is to read the TrueCrypt documentation carefully, so you're aware of the risks. As stated in the comments to this post, powering your computer off is probably the best way to clear the master keys from RAM. However, you don't always get that opportunity (the FBI doesn't call in advance before kicking in doors) and there's also the possibility of cold boot attacks even if you do shut down.

-Michael Ligh (@iMHLv2)


operating systems - Are passwords stored in memory safe? - Information Security Stack Exchange


Comments:"operating systems - Are passwords stored in memory safe? - Information Security Stack Exchange"

URL:http://security.stackexchange.com/questions/29019/are-passwords-stored-in-memory-safe


You are touching a sore point...

Historically, computers were mainframes where a lot of distinct users launched sessions and processes on the same physical machine. Unix-like systems (e.g. Linux), but also VMS and its relatives (and this family includes all Windows of the NT line, hence 2000, XP, Vista, 7, 8...), have been structured to support the mainframe model.

Thus, the hardware provides privilege levels. A central piece of the operating system is the kernel which runs at the highest privilege level (yes, I know there are subtleties with regards to virtualization) and manages the privilege levels. Applications run at a lower level and are forcibly prevented by the kernel from reading or writing each other's memory. Applications obtain RAM by pages (typically 4 or 8 kB) from the kernel. An application which tries to access a page belonging to another application is blocked by the kernel, and severely punished ("segmentation fault", "general protection fault"...).

When an application no longer needs a page (in particular when the application exits), the kernel takes control of the page and may give it to another process. Modern operating systems "blank" pages before giving them back, where "blanking" means "filling with zeros". This prevents leaking data from one process to another. Note that Windows 95/98/Millennium did not blank pages, and leaks could occur... but these operating systems were meant for a single user per machine.

Of course, there are ways to escape the wrath of the kernel: a few doorways are available to applications which have "enough privilege" (not the same kind of privilege as above). On a Linux system, this is ptrace(). The kernel allows one process to read and write the memory of another, through ptrace(), provided that both processes run under the same user ID, or that the process which does the ptrace() is a "root" process. Similar functionality exists in Windows.

The bottom line is that passwords in RAM are no safer than what the operating system allows. By definition, by storing some confidential data in the memory of a process, you are trusting the operating system not to give it away to third parties. The OS is your friend, because if the OS is an enemy then you have utterly lost.

Now comes the fun part. Since the OS enforces separation between processes, many people have tried to find ways to pierce these defenses. And they found a few interesting things...

  • The "RAM" which the applications see is not necessarily true "memory". The kernel is a master of illusions, and gives pages that do not necessarily exist. The illusion is maintained by swapping RAM contents with a dedicated space on the disk, where free space is present in larger quantities; this is called virtual memory. Applications need not be aware of it, because the kernel will bring back the pages when needed (but, of course, disk is much slower than RAM). An unfortunate consequence is that some data, purportedly held in RAM, makes it to a physical medium where it will stay until overwritten. In particular, it will stay there if the power is cut. This allows for attacks where the bad guy grabs the machine and runs away with it, to inspect the data later on. Or leakage can occur when a machine is decommissioned and sold on eBay, and the sysadmin forgot to wipe out the disk contents.

    Linux provides a system call named mlock() which prevents the kernel from sending some specific pages to the swap space. Since locking pages in RAM can deplete the RAM available to other processes, you need some privileges (root again) to use this function. (A minimal sketch of using it appears after this list.)

    An aggravating circumstance is that it is not necessarily easy to keep track of where your password is really in RAM. As a programmer, you access RAM through the abstraction provided by the programming language. In particular, programming languages which use Garbage Collection may transparently copy objects in RAM (because it really helps for many GC algorithms). Most programming languages are thus impacted (e.g. Java, C#/.NET, Javascript, PHP,... the list is almost endless).

  • Hibernation brings back the same issues, with a vengeance. By nature, hibernation must write the whole RAM to the disk -- this includes pages which were mlocked, and even the contents of the CPU registers. To avoid leaks through hibernation, you have to resort to drastic measures like encrypting the whole disk -- this naturally implies typing the unlock password whenever you awake the machine.

  • The mainframe model assumes that it can run several processes which are hostile to each other, and yet maintain perfect peace and isolation. Modern hardware makes that very difficult. When two processes run on the same CPU, they share some resources, including cache memory; memory accesses are much faster in the cache than elsewhere, but cache size is very limited. This has been exploited to recover cryptographic keys used by one process, from another. Variants have been developed which use other cache-like resources, e.g. branch prediction in a CPU. While research on that subject concentrates on cryptographic keys, which are high-value secrets, it could really apply to just any data.

    On a similar note, video cards can do Direct Memory Access. Whether DMA can be abused to read or write memory belonging to other processes depends on how well undocumented hardware, closed-source drivers and kernels collaborate to enforce the appropriate access controls. I would not bet my last shirt on it...
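As promised above, here is a minimal sketch (my own, assuming Linux with glibc and either root privileges or a sufficient RLIMIT_MEMLOCK) of pinning a secret in RAM from Python via ctypes; it only illustrates the mlock() idea and does nothing about extra copies the runtime may already have made:

import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

buf = ctypes.create_string_buffer(64)          # fixed buffer with a stable address
if libc.mlock(ctypes.addressof(buf), ctypes.sizeof(buf)) != 0:
    raise OSError(ctypes.get_errno(), "mlock failed (privileges or RLIMIT_MEMLOCK?)")

buf.value = b"correct horse battery staple"    # place the secret in the locked page

# ... use the password here ...

ctypes.memset(ctypes.addressof(buf), 0, ctypes.sizeof(buf))   # wipe before releasing
libc.munlock(ctypes.addressof(buf), ctypes.sizeof(buf))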

Conclusion: yes, when you store a password in RAM, you are trusting the OS for keeping that confidential. Yes, the task is hard, even nigh impossible on modern systems. If some data is highly confidential, you really should not use the mainframe model, and not allow potentially hostile entities to run their code on your machine.

(Which, by the way, means that hosted virtual machines and cloud computing cannot be ultimately safe. If you are serious about security, use dedicated hardware.)

Bitrot and atomic COWs: Inside “next-gen” filesystems | Ars Technica


Comments:"Bitrot and atomic COWs: Inside “next-gen” filesystems | Ars Technica"

URL:http://arstechnica.com/information-technology/2014/01/bitrot-and-atomic-cows-inside-next-gen-filesystems/


Most people don't care much about their filesystems. But at the end of the day, the filesystem is probably the single most important part of an operating system. A kernel bug might mean the loss of whatever you're working on right now, but a filesystem bug could wipe out everything you've ever done... and it could do so in ways most people never imagine.

Sound too theoretical to make you care about filesystems? Let's talk about "bitrot," the silent corruption of data on disk or tape. One at a time, year by year, a random bit here or there gets flipped. If you have a malfunctioning drive or controller—or a loose/faulty cable—a lot of bits might get flipped. Bitrot is a real thing, and it affects you more than you probably realize. The JPEG that ended in blocky weirdness halfway down? Bitrot. The MP3 that startled you with a violent CHIRP!, and you wondered if it had always done that? No, it probably hadn't—blame bitrot. The video with a bright green block in one corner followed by several seconds of weird rainbowy blocky stuff before it cleared up again? Bitrot.

The worst thing is that backups won't save you from bitrot. The next backup will cheerfully back up the corrupted data, replacing your last good backup with the bad one. Before long, you'll have rotated through all of your backups (if you even have multiple backups), and the uncorrupted original is now gone for good.

Contrary to popular belief, conventional RAID won't help with bitrot, either. "But my raid5 array has parity and can reconstruct the missing data!" you might say. That only works if a drive completely and cleanly fails. If the drive instead starts spewing corrupted data, the array may or may not notice the corruption (most arrays don't check parity by default on every read). Even if it does notice... all the array knows is that something in the stripe is bad; it has no way of knowing which drive returned bad data—and therefore which one to rebuild from parity (or whether the parity block itself was corrupt).

What might save your data, however, is a "next-gen" filesystem.

Let's look at a graphic demonstration. Here's a picture of my son Finn that I like to call "Genesis of a Supervillain." I like this picture a lot, and I'd hate to lose it, which is why I store it on a next-gen filesystem with redundancy. But what if I didn't do that?

As a test, I set up a virtual machine with six drives. One has the operating system on it, two are configured as a simple btrfs-raid1 mirror, and the remaining three are set up as a conventional raid5. I saved Finn's picture on both the btrfs-raid1 mirror and the conventional raid5 array, and then I took the whole system offline and flipped a single bit—yes, just a single bit from 0 to 1—in the JPG file saved on each array. Here's the result:

[Image gallery: the corrupted copy of the photo alongside the copy served intact from btrfs-raid1.]

The raid5 array didn't notice or didn't care about the flipped bit in Finn's picture any more than a standard single disk would. The next-gen btrfs-raid1 system, however, immediately caught and corrected the problem. The results are pretty obvious. If you care about your data, you want a next-gen filesystem. Here, we'll examine two: the older ZFS and the more recent btrfs.

What is a “next-generation” filesystem, anyway?

"Next-generation" is a phrase that gets handed out like sales flyers in a mall parking lot. But in this case, it actually means something. I define a "generation" of filesystems as a group that uses a particular "killer feature"—or closely related set of them—that earlier filesystems don't but that later filesystems all do. Let's take a quick trip down memory lane and examine past and current generations:

  • Generation 0: No system at all. There was just an arbitrary stream of data. Think punchcards, data on audiocassette, Atari 2600 ROM carts.
  • Generation 1: Early random access. Here, there are multiple named files on one device with no folders or other metadata. Think Apple ][ DOS (but not ProDOS!) as one example.
  • Generation 2: Early organization (aka folders). When devices became capable of holding hundreds of files, better organization became necessary. We're referring to TRS-DOS, Apple //c ProDOS, MS-DOS FAT/FAT32, etc.
  • Generation 3: Metadata—ownership, permissions, etc. As the user count on machines grew higher, the ability to restrict and control access became necessary. This includes AT&T UNIX, Netware, early NTFS, etc.
  • Generation 4: Journaling! This is the killer feature defining all current, modern filesystems—ext4, modern NTFS, UFS2, you name it. Journaling keeps the filesystem from becoming inconsistent in the event of a crash, making it much less likely that you'll lose data, or even an entire disk, when the power goes off or the kernel crashes.

So if you accept my definition, "next-generation" currently means "fifth generation." It's defined by an entire set of features: built-in volume management, per-block checksumming, self-healing redundant arrays, atomic COW snapshots, asynchronous replication, and far-future scalability.
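To make the per-block checksumming idea concrete, here is a toy Python sketch (my own model, not how btrfs or ZFS are actually implemented; they store checksums such as CRC32C or fletcher4 in their metadata): checksum every block, and on read a mismatch tells you exactly which block is bad, and therefore which redundant copy to heal it from.

import hashlib

BLOCK = 4096  # assume 4 KiB blocks

def block_checksums(data: bytes) -> list:
    # A next-gen filesystem keeps one checksum per block in its metadata.
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

original = bytes(64 * BLOCK)                  # stand-in for a file's contents
stored = block_checksums(original)            # recorded when the data was written

corrupted = bytearray(original)
corrupted[100_000] ^= 0x01                    # bitrot: a single flipped bit

for i, chk in enumerate(block_checksums(bytes(corrupted))):
    if chk != stored[i]:
        print(f"block {i} fails its checksum; rebuild it from the mirror copy")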

That's quite a laundry list, and one or two individual features from it have shown up in some "current-gen" systems (Windows has Volume Shadow Copy to correspond with snapshots, for example). But there's a strong case to be made for the entire list defining the next generation.

Justify your generation

The quickest objection you could make to defining "generations" like this would be to point at NTFS' Volume Snapshot Service (VSS) or at the Linux Logical Volume Manager (LVM), each of which can take snapshots of filesystems mounted beneath them. However, these snapshots can't be replicated incrementally, meaning that backing up 1TB of data requires groveling over 1TB of data every time you do it. (FreeBSD's UFS2 also offered limited snapshot capability.) Worse yet, you generally can't replicate them as snapshots with references intact—which means that your remote storage requirements increase exponentially, and the difficulty of managing backups does as well. With ZFS or btrfs replicated snapshots, you can have a single, immediately browsable, fully functional filesystem with 1,000+ versions of the filesystem available simultaneously. Using VSS with Windows Backup, you must use VHD files as a target. Among other limitations, VHD files are only supported up to 2TiB in size, making them useless for even a single backup of a large disk or array. They must also be mounted with special tools not available on all versions of Windows, which goes even further to limit them as tools for specialists only.

Finally, Microsoft's VSS depends on "writer" components that interface with applications (such as MS SQL Server) which can themselves hang up, making it difficult or impossible to successfully create a VSS snapshot in some cases. To be fair, when working properly, VSS writers offer something that simple snapshots don't—application-level consistency. But VSS writer bugs are a real problem, and I've encountered lots of Windows Servers that absolutely could not get shadow copies created properly without wiping and reloading them. I have yet to encounter a ZFS or btrfs filesystem or array that won't immediately create a valid snapshot.

At the end of the day, both LVM and VSS offer useful features that a lot of sysadmins do use, but they don't jump right out and demand your attention the way filenames, folders, metadata, or journaling did when they came onto the market. Still, this is only one feature out of the entire laundry list. You could make a case that snapshots made the fifth generation, and the other features in ZFS and btrfs make the sixth. But by the time you finish this article, you'll see there's no way to argue that btrfs and ZFS don't constitute a new generation that is easily distinguishable from everything before them.

US physically hacks 100,000 foreign computers - Americas - Al Jazeera English


Comments:" US physically hacks 100,000 foreign computers - Americas - Al Jazeera English "

URL:http://www.aljazeera.com/news/americas/2014/01/us-physically-hacks-100000-foreign-computers-20141154313871671.html


The United States has implanted devices in nearly 100,000 computers to spy on institutions such as the Chinese and Russian military, EU trade groups and agencies within Saudi Arabia, India and Pakistan.

The New York Times on Tuesday cited documents from the National Security Agency, computer experts and US officials stating that the NSA uses radio wave technology to gain access to otherwise encrypted computers or machines that are not connected to the internet.

The Times reported that the agency has been inserting tiny circuit boards into computers for several years. The technology allows non-internet connected computers to be hacked, and bypasses encryption and anti-spyware systems that otherwise prevent hacking over the world wide web.

The NSA calls the effort an "active defence" and has used the technology to monitor units of the Chinese and Russian armies, drug cartels, trade institutions inside the European Union, and US allies including Saudi Arabia, India and Pakistan, the Times reported.

Among the most frequent targets has been the Chinese army, the Times reported.

The United States has accused the Chinese Army of launching regular attacks on American industrial and military targets, often to steal secrets or intellectual property.

US officials have protested when Chinese attackers have placed similar software on computer systems of US companies or government agencies.

The NSA said the technology has not been used in computers in the US. 

"NSA's activities are focused and specifically deployed against - and only against - valid foreign intelligence targets in response to intelligence requirements,'' Vanee Vines, an agency spokeswoman, said in a statement to the Times.

"We do not use foreign intelligence capabilities to steal the trade secrets of foreign companies on behalf of - or give intelligence we collect to - US companies to enhance their international competitiveness or increase their bottom line.''



Complexity of FreeBSD VFS using ZFS as an example. Part 1. | HybridCluster


Comments:"Complexity of FreeBSD VFS using ZFS as an example. Part 1. | HybridCluster"

URL:http://www.hybridcluster.com/blog/complexity-freebsd-vfs-using-zfs-example-part-1-2/


This entry was posted on January 14, 2014 by Andriy Gapon. Leave a reply

I spend a lot of time hacking on the ZFS port to FreeBSD and fixing various bugs. Quite often the bugs are specific to the port and not to the OpenZFS core. A good share of those bugs are caused by differences between VFS models in Solaris and its descendants like illumos, and FreeBSD. I would like to talk about those differences.

But first, a few words about VFS in general. VFS stands for "virtual file system". It is an interface that all concrete filesystem drivers must implement so that higher-level code can be agnostic of any implementation details. More strictly, VFS is a contract between an operating system and a filesystem driver.

In a wider sense, VFS also includes the higher-level, filesystem-independent code that provides more convenient interfaces to the consumers. For example, a filesystem must implement an interface for looking up an entry by name in a directory. VFS provides a more convenient interface that allows a lookup to be performed using an absolute or a relative path, given a starting directory. Additionally, VFS in a wider sense includes utility code that can be shared between different filesystem drivers.

VFS Overview

Common VFS models for UNIX and UNIX-like operating systems impose some common requirements on the structure of a filesystem. First, it is assumed that there are special filesystem objects called directories that provide a mapping from names to other filesystem objects, which are called directory entries. All other filesystem objects contain data or provide other utilities. The directories form a rooted tree starting with a specially designated root directory; taken as a whole, the filesystem is a connected rooted directed acyclic graph where each edge has a name associated with it. Non-directory objects may be reachable by multiple paths. In other words, a non-directory object can be a directory entry in more than one directory, or it can appear as multiple entries with different names in a single directory. Conventionally those multiple paths to a single object are referred to as hard links. Additionally, there is a designated object type called a symbolic link that can contain a relative or an absolute path. When traversing such a symbolic link object a filesystem consumer may jump to the contained path, called the symbolic link destination. It is not required to do so, however. Symbolic links allow creating the appearance of arbitrary topologies, including loops or broken paths that lead nowhere.

A directory must always contain two special entries:

  • “.” (dot) refers to the directory itself
  • “..” (dot dot) refers to a parent directory of the directory or to itself for the root directory

Each filesystem object is customarily referred to as an inode, especially in the context of a filesystem driver implementation. VFS requires that each filesystem object must have a unique integer identifier referred to as an inode number.

At the VFS API layer the inodes are represented as vnodes where ‘v’ stands for virtual. In object oriented terms the vnodes can be thought of as interfaces or abstract base classes for the inodes of the concrete filesystems. The vnode interface has abstract methods known as vnode operations or VOPs that dispatch calls to concrete implementations.

Typically an OS kernel is implemented in C, so object oriented facilities have to be emulated. In particular, a one-to-one relation between a vnode and an inode is established via pointers rather than by using an is-a relationship. For example, here is how a method for creating a new directory looks in FreeBSD VFS:

int
VOP_MKDIR(struct vnode *dvp, struct vnode **vpp,
    struct componentname *cnp, struct vattr *vap);

dvp (“directory vnode pointer”) is a vnode that represents an existing directory; the method would be dispatched to an implementation associated with this vnode. If the call is successful, then vpp (“vnode pointer to pointer”) would point to a vnode representing a newly created directory. cnp defines a name for the new directory and vap various attributes of it. The same method in Solaris VFS has a few additional parameters, but otherwise it is equivalent to the FreeBSD VFS one.

It would be wasteful or even plain impossible to have vnode objects in memory for every filesystem object that could potentially be accessed, so vnodes are created upon access and destroyed when they are no longer needed. Given that C does not provide any sort of smart pointers, the vnode life cycle must be maintained explicitly. Since in modern operating systems multiple threads may concurrently access a filesystem, and potentially the same vnodes, the lifecycle must be controlled by a reference count. All VFS calls that produce a vnode, such as lookups or new object creation, return the vnode referenced. Once a caller is done using the vnode it must explicitly drop a reference. When the reference count goes to zero the concrete filesystem is notified about that and should take an appropriate action. In the Solaris VFS model the concrete filesystem must free both its implementation-specific object and the vnode. In FreeBSD VFS the filesystem must handle its private implementation object, but the vnode is handled by the VFS code.

In practice an application may perform multiple accesses to a file without having any persistent handle open for it. For example, the application may call access(2), stat(2) and similar system calls. Also, for example, lookups by different applications may frequently traverse the same directories. As a result, it would be inefficient to destroy a vnode and its associated inode as soon as its use count reaches zero. All VFS implementations cache vnodes to avoid the expense of their frequent destruction and construction. Also, VFS implementations tend to cache path-to-vnode relationships to avoid the expense of looking up a directory entry via a call to a filesystem driver, VOP_LOOKUP.

Obviously, there can be different strategies for maintaining the caches. For example, the lifetime of a cache entry could be limited; or the total size of the cache could be limited and any excess entries could be purged in a least recently used fashion or in a least frequently used fashion. And so on.

Solaris VFS combines the name cache and the vnode cache. The name cache maintains an extra reference on a vnode and so it is not recycled as long as it is present in the name cache. The advantage of this cache unification is simplicity. The disadvantage is that the two caching modes are coupled. If a filesystem driver for whatever reason would want all lookups to always go through it and didn’t use the name cache, then there would not be any vnode caching for it as well.

As soon as the vnode reference count goes to zero Solaris VFS invokes VOP_INACTIVE method which instructs the filesystem driver to free the vnode and all internal resources associated with it. Theoretically, the filesystem driver could have its internal cache of vnodes but that does not seem to happen in practice.

FreeBSD VFS maintains separate caches and as a result it has a more complex vnode lifecycle. First, FreeBSD VFS maintains two separate reference counts on a vnode. One is called a use count and is used to denote active uses of the vnode such as by a system call in progress. The other is called a hold count and it denotes “passive” uses of the vnode, which means that a user wants a guarantee that its vnode pointer stays valid (e.g. it would not point to freed memory), but the user is not going to perform any operations on the vnode. The hold count is used, for instance, by FreeBSD VFS name cache, but there are other uses. vnode usage implies vnode hold, so every time the use count is increased or decreased, the same is done to the hold count. As a result the hold count is always greater or equal to the use count. When the use count reaches zero FreeBSD VFS invokes VOP_INACTIVE method, but it has a radically different meaning from VOP_INACTIVE in Solaris VFS. This is just a chance for the filesystem driver to perform some maintenance on a vnode, but the vnode must stay fully valid and thus its associated inode must stay valid. An example of the maintenance is removing a file that was unlinked from a filesystem namespace but was still open by one or more application. A vnode with zero use count is considered to be in an inactive state. Conversely, a vnode with non-zero use count is said to be in an active state.

When the hold count reaches zero the vnode is not immediately freed, but is transitioned to a so called free state. In that state the vnode stays fully valid, but is subject to being freed at any time unless it is used again. The free vnodes are placed on a so called free list, which is in essence the vnode cache.

FreeBSD VFS has configurable targets for a total number of vnodes and vnodes in the free state. When the targets are exceeded the free vnodes get reclaimed. This is done by invoking VOP_RECLAIM method. The filesystem driver must free all its internal resources associated with a reclaimed vnode. The reclaimed vnode is marked with a special DOOMED flag. That flag is an indication that the vnode is invalid in the sense that it is not associated with any real filesystem object. Any operations on such a vnode return an error or in some cases lead to a system crash. Thus, we have another vnode state that can be called doomed or less dramatically reclaimed.

While being reclaimed the vnode must be held (its hold count greater than zero) at least by the code that initiates reclamation. Once all holds are released, the vnode that is DOOMED and that has zero hold count is really destroyed.

In FreeBSD VFS the hold count does not guarantee that the vnode remains valid. If the total vnode count exceeds the target and there are not enough free vnodes to meet the target, then inactive vnodes (zero use count, non zero hold count) can be reclaimed as well. As described above, the reclaimed vnode will stay around as long as there are any holds on it. That guarantees that dereferencing a vnode pointer is safe, but does not guarantee safety of any operations on the vnode. A vnode holder must check vnode state (often implicitly) before actually using it. In this sense a hold should be considered as a weak reference.

The complexity of FreeBSD vnode lifecycle management obviously needs a justification. Continuing the last sentence of the previous paragraph it would be tempting to declare the FreeBSD use count and the Solaris reference count to be a strong vnode reference. Not quite so…

And now it is time to introduce the first real-world complexity that any VFS (that supports the feature) must deal with — forced unmounting. In some situations it is desirable to unmount a filesystem even though it is still in active use. This means that there are active vnodes with a non-zero use / reference count that are going to end up in an explicit or implicit doomed state, because the actual filesystem objects will no longer be accessible.

In FreeBSD this is handled by “forcefully” reclaiming the active vnodes and transitioning them to the explicit doomed state. Solaris VFS does not have a state like that on the VFS level, so this state must be implemented in each concrete filesystem. Since the vnode will still appear as a valid vnode its inode must be kept around. Typically there would be a data member in the inode that would be marked with a special value to denote that the inode is not actually valid. Then every VOP of every concrete filesystem must check its special “doomed” tag and abort operation in an appropriate manner. Only when all references on the vnode are dropped will it and its inode be actually destroyed.

So, this is the first example of FreeBSD VFS trying to handle a common problem by itself and thus gaining complexity, whereas Solaris VFS stays simpler at the cost of deferring complexity to the concrete filesystems.

There are trade-offs in both approaches. Generalization reduces code duplication and maintenance, but increases the complexity of the generalized code and reduces flexibility. Leaving the problem to each concrete filesystem increases code duplication and the effort required for developing a new filesystem, but it also allows for greater flexibility in each filesystem implementation.

Part II of this article is coming next month, follow us on Twitter for updates!

At HybridCluster we’re changing the rules for cloud infrastructure: our replication system, built on top of OpenZFS, makes it possible to deliver a cloud with resilience and auto-scaling built-in, rather than needing to be re-engineered from scratch by developers every time they build a cloud app. Give it a try for free today.



A VC: VC Pitches In A Year Or Two


Comments:"A VC: VC Pitches In A Year Or Two"

URL:http://www.avc.com/a_vc/2014/01/vc-pitches-in-a-year-or-two.html


Entrepreneur: I plan to launch a better streaming music service. It leverages the data on what you and your friends currently listen to, combines that with the schedule of new music launches and acts that are touring in your city in the coming months and creates playlists of music that you should be listening to in order to find new acts to listen to and go see live.

VC: Well since Spotify, Beats, and Apple have paid all the telcos so that their services are free on the mobile networks, we are concerned that new music services like yours will have a hard time getting new users to use them because the data plan is so expensive. We like you and the idea very much, but we are going to have to pass.

 

Entrepreneur: I plan to launch a service that curates the funniest videos from all across the internet and packages them up in a 30 minute daily video show that people will watch on their phones as they are commuting to work on the subway. It's called SubHumor.

VC: Well since YouTube, Hulu, and Netflix have paid all the telcos so that their services are free via a sponsored data plan, I am worried that it will be hard to get users to watch any videos on their phones that aren't being served by YouTube, Hulu, or Netflix. We like you and your idea very much, but we are going to have to pass.

 

Entrepreneur: I plan to launch a photo sharing service where the faster your friends like the photos, the faster they disappear. It's gamified social snapchat.

VC: Well since Facebook, Instagram, Twitter, and Snapchat have paid the telcos so the photos that are served up in their apps don't use up any of the data plan, I worry that users won't want to use any other photo sharing services since they will have to pay high data costs to use them. We love your idea and would have funded it right here in the meeting back in the good old days of the open internet, but we can't do that anymore. We are passing.

 

This is Internet 3.0. With yesterday's court ruling saying that the FCC cannot implement the net neutrality rules they adopted a while back, this nightmare is a likely reality. Telcos will pick their preferred partners, subsidize the data costs for those apps, and make it much harder for new entrants to compete with the incumbents.

Ampere to get rational redefinition : Nature News & Comment


Comments:" Ampere to get rational redefinition : Nature News & Comment "

URL:http://www.nature.com/news/ampere-to-get-rational-redefinition-1.14512


PTB

A semiconductor device, seen in a scanning electron micrograph, can measure the flow of single electrons.

Physicists have tracked electrons crossing a semiconductor chip one at a time — an experiment that should at last enable a rational definition of the ampere, the unit of electrical current.

At present, an ampere is defined as the amount of charge flowing per second through two infinitely long wires one metre apart, such that the wires attract each other with a force of 2 × 10⁻⁷ newtons per metre of length. That definition, adopted in 1948 and based on a thought experiment that can at best be approximated in the laboratory, is clumsy — almost as much of an embarrassment as the definition of the kilogram, which relies on the fluctuating mass of a 125-year-old platinum-and-iridium cylinder stored at the International Bureau of Weights and Measures (BIPM) in Paris.
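For reference, that 2 × 10⁻⁷ figure is tied to the conventional value of the magnetic constant through the textbook formula for the force per unit length between two parallel currents, F/L = μ₀I₁I₂/(2πd). A quick check (my own arithmetic, not from the article):

import math

mu0 = 4 * math.pi * 1e-7      # magnetic constant, N/A^2 (exact in the pre-2019 SI)
I1 = I2 = 1.0                 # one ampere in each infinitely long wire
d = 1.0                       # wires one metre apart

print(mu0 * I1 * I2 / (2 * math.pi * d))   # 2e-07 N per metre of wire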

The new approach, described in a paper (ref. 1) posted onto the arXiv server on 19 December, would redefine the amp on the basis of e, a physical constant representing the charge of an electron. Metrologists have long sought such a rational definition. “It’s an enormously challenging thing to try and do and it’s quite an important paper,” says Stephen Giblin, a physicist at the National Physical Laboratory in Teddington, UK.

The result will find favour at a meeting of the BIPM’s General Conference on Weights and Measures in November. There, metrologists will discuss a proposal to redefine the ampere, the kilogram and two other standard (SI) units — the mole and the kelvin — in terms of the physical constants e, Planck’s constant, Avogadro’s constant and Boltzmann’s constant.

PTB

A researcher at PTB in Braunschweig, Germany, installs a semiconductor chip containing a single-electron pump that could help to redefine the ampere.

Two other basic SI units, the metre and the second, have already been redefined in terms of two constants: the speed of light and the frequency at which electrons in caesium atoms transition between energy levels.

In the amp experiment (ref. 1), physicist Hans Schumacher of the Federal Institute of Physical and Technical Affairs (PTB) in Braunschweig, Germany, and his colleagues made use of a single-electron pump, a device in which voltage pulses prompt electrons to quantum-mechanically tunnel across barriers one at a time. The researchers tracked the paths of individual electrons by detecting changes in the electrical charge stored at points between the barriers. Primitive electron pumps have existed since 1990, but this is the first time that changes in charge have been detected for each hop of an electron.

The pump transferred just a few dozen electrons per second — slow enough to permit precision measurement and thus to provide proof of principle for redefining the amp. But this is only a first step: the set-up would not be practical for calibrating current-measuring ammeters, which need to run at higher currents. The ultimate goal is to create a ‘mise en pratique’ — a standard-setting experiment that can be reproduced in any lab to calibrate measurements of current precisely — so the race is now on to combine Schumacher’s validation method with a higher-current pump.
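To see why pump speed matters, note that the current delivered by a single-electron pump is simply I = e·f, the elementary charge times the transfer rate; a back-of-the-envelope check (my own arithmetic, not from the paper) shows that even a billion electrons per second amounts to only a fraction of a nanoampere:

e = 1.602176634e-19            # elementary charge in coulombs

for rate in (50, 1e9):         # roughly "a few dozen" vs. ~a billion electrons per second
    print(f"{rate:12g} electrons/s  ->  {e * rate:.3g} A")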

In 2012, Giblin pioneered a semiconductor single-electron pump that transferred nearly one billion electrons per second (ref. 2), but he could not track them one by one. Other types of pump include turnstiles, in which electrons tunnel between superconducting wires, and tunnel junctions, in which electrons tunnel between aluminium islands separated by layers of insulating oxide. But these must also be operated at relatively low currents to track single electrons. “It’s impossible to say which is the winning concept,” says Jukka Pekola, a physicist at Aalto University in Espoo, Finland, who reviewed approaches to redefining the amp in 2013 (ref. 3).

Still, the amp is in good shape for the November meeting, as is the kelvin. The charge of the electron and Boltzmann’s constant have both been measured precisely, so both units are ready to be redefined today, says François Piquemal, a physicist at the National Laboratory of Metrology and Testing in Paris.

But the process could be delayed until the 2018 meeting of the BIPM general conference. All four units are intertwined, so the plan is to redefine them all at once, and the kilogram is causing problems. There are two rival approaches to its redefinition: a watt balance, which would balance a test mass against Earth’s gravity in terms of electrical power, and a precise count of atoms in a sphere of silicon. The two approaches give slightly different answers. Piquemal says that the differing approaches need to be reconciled before the units can be redefined.

Why is a minute divided into 60 seconds, an hour into 60 minutes, yet there are only 24 hours in a day?: Scientific American


Comments:"Why is a minute divided into 60 seconds, an hour into 60 minutes, yet there are only 24 hours in a day?: Scientific American"

URL:http://www.scientificamerican.com/article.cfm?id=experts-time-division-days-hours-minutes


Michael A. Lombardi, a metrologist in the Time and Frequency Division at the National Institute of Standards and Technology in Boulder, Colo., takes the case.

In today's world, the most widely used numeral system is decimal (base 10), a system that probably originated because it made it easy for humans to count using their fingers. The civilizations that first divided the day into smaller parts, however, used different numeral systems, specifically duodecimal (base 12) and sexagesimal (base 60).

Thanks to documented evidence of the Egyptians' use of sundials, most historians credit them with being the first civilization to divide the day into smaller parts. The first sundials were simply stakes placed in the ground that indicated time by the length and direction of the resulting shadow. As early as 1500 B.C., the Egyptians had developed a more advanced sundial. A T-shaped bar placed in the ground, this instrument was calibrated to divide the interval between sunrise and sunset into 12 parts. This division reflected Egypt's use of the duodecimal system--the importance of the number 12 is typically attributed either to the fact that it equals the number of lunar cycles in a year or the number of finger joints on each hand (three in each of the four fingers, excluding the thumb), making it possible to count to 12 with the thumb. The next-generation sundial likely formed the first representation of what we now call the hour. Although the hours within a given day were approximately equal, their lengths varied during the year, with summer hours being much longer than winter hours.

Without artificial light, humans of this time period regarded sunlit and dark periods as two opposing realms rather than as part of the same day. Without the aid of sundials, dividing the dark interval between sunset and sunrise was more complex than dividing the sunlit period. During the era when sundials were first used, however, Egyptian astronomers also first observed a set of 36 stars that divided the circle of the heavens into equal parts. The passage of night could be marked by the appearance of 18 of these stars, three of which were assigned to each of the two twilight periods when the stars were difficult to view. The period of total darkness was marked by the remaining 12 stars, again resulting in 12 divisions of night (another nod to the duodecimal system). During the New Kingdom (1550 to 1070 B.C.), this measuring system was simplified to use a set of 24 stars, 12 of which marked the passage of the night. The clepsydra, or water clock, was also used to record time during the night, and was perhaps the most accurate timekeeping device of the ancient world. The timepiece--a specimen of which, found at the Temple of Ammon in Karnak, dated back to 1400 B.C.--was a vessel with slanted interior surfaces to allow for decreasing water pressure, inscribed with scales that marked the division of the night into 12 parts during various months.

Once both the light and dark hours were divided into 12 parts, the concept of a 24-hour day was in place. The concept of fixed-length hours, however, did not originate until the Hellenistic period, when Greek astronomers began using such a system for their theoretical calculations. Hipparchus, whose work primarily took place between 147 and 127 B.C., proposed dividing the day into 24 equinoctial hours, based on the 12 hours of daylight and 12 hours of darkness observed on equinox days. Despite this suggestion, laypeople continued to use seasonally varying hours for many centuries. (Hours of fixed length became commonplace only after mechanical clocks first appeared in Europe during the 14th century.)

Hipparchus and other Greek astronomers employed astronomical techniques that were previously developed by the Babylonians, who resided in Mesopotamia. The Babylonians made astronomical calculations in the sexagesimal (base 60) system they inherited from the Sumerians, who developed it around 2000 B.C. Although it is unknown why 60 was chosen, it is notably convenient for expressing fractions, since 60 is the smallest number divisible by the first six counting numbers as well as by 10, 12, 15, 20 and 30.
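
For anyone who wants to check that divisibility claim, a short Python snippet (purely illustrative) confirms that 60 is the least common multiple of the first six counting numbers and lists its divisors:

    from math import lcm  # Python 3.9+

    # 60 is the smallest number divisible by 1 through 6 ...
    print(lcm(1, 2, 3, 4, 5, 6))                      # -> 60
    # ... and its divisors also include 10, 12, 15, 20 and 30.
    print([d for d in range(1, 61) if 60 % d == 0])   # -> [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]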



Secret Trans-Pacific Partnership Agreement (TPP) - Environment Consolidated Text

$
0
0

Comments:"Secret Trans-Pacific Partnership Agreement (TPP) - Environment Consolidated Text"

URL:http://wikileaks.org/tpp-enviro/


The following draft Trans-Pacific Partnership (TPP) environment text is without prejudice to the positions of any TPP Party. It responds to the request by TPP Ministers that Canada draft a consolidated text after bilateral consultations with other TPP Parties to determine concerns and redlines and possible landing zones.

This document contains TPP Confidential Information
Modified Handling Authorized
November 24, 2013

 

Chapter SS

Environment

For purposes of this Chapter:

environmental law means any statute or regulation of a Party, or provision thereof, including any that implement its obligations under a multilateral environmental agreement, the primary purpose of which is the protection of the environment, or the prevention of a danger to human life or health, through:

  • the prevention, abatement, or control of the release, discharge, or emission of pollutants or environmental contaminants;
  • the control of environmentally hazardous or toxic chemicals, substances, materials, and wastes, and the dissemination of information related thereto; or
  • the protection or conservation of wild flora or fauna, including endangered species, their habitat, and specially protected natural areas,

but does not include any statute or regulation, or provision thereof, directly related to worker safety or health, nor any statute or regulation, or provision thereof, the primary purpose of which is managing the subsistence or aboriginal harvesting of natural resources.

statute or regulation means:

[AU: for Australia, an act of the Commonwealth Parliament, or a regulation made by the Governor-General in Council under delegated authority under an Act of the Commonwealth Parliament, that is enforceable at the central level of government.]

[BN: for Brunei, an Act, Order or Rule promulgated pursuant to the Constitution of Brunei Darussalam, enforceable by the Government of His Majesty the Sultan and Yang Di-Pertuan of Brunei Darussalam.]

[CA: for Canada, an Act of the Parliament of Canada or regulation made under an Act of the Parliament of Canada that is enforceable by the central level of government.]

[MY: for Malaysia, an act of Parliament or regulation promulgated pursuant to an act of Parliament that is enforceable by action of the federal government.]

[MX: for the United Mexican States, an act of Congress or regulation promulgated pursuant to an act of Congress that is enforceable by action of the federal level of government.]

[PE: for Peru, a law of Congress or Decree or Resolution promulgated by the central level of government to implement a law of Congress that is enforceable by action of the central level of government.]

[US: for the United States, an act of Congress or regulation promulgated pursuant to an act of Congress that is enforceable by action of the central level of government.]

[VN: for Vietnam, a law of the National Assembly, an ordinance of the Standing Committee of the National Assembly, or a regulation promulgated by the central level of government to implement a law of the National Assembly or an ordinance of the Standing Committee of the National Assembly that is enforceable by action of the central level of government.]

Drafter’s note: Language relating to equivalency in scope of coverage is attached. Placement is to be determined.

  • The objectives of this Chapter are to: promote mutually supportive trade and environment policies; promote high levels of environmental protection and effective enforcement of environmental laws; and enhance the capacities of the Parties to address trade-related environmental issues, including through cooperation.
  • Taking account of their respective national priorities and circumstances, the Parties recognize that enhanced cooperation to protect and conserve the environment and sustainably manage their natural resources brings benefits which can contribute to sustainable development, strengthen their environmental governance and complement the objectives of the TPP.
  • The Parties further recognize that it is inappropriate to set or use their environmental laws or other measures in a manner which would constitute a disguised restriction on trade or investment between the Parties.
  • The Parties recognize the importance of mutually supportive trade and environmental policies and practices to improve environmental protection in the furtherance of sustainable development.
  • The Parties recognize the sovereign right of each Party to establish its own levels of domestic environmental protection and its own environmental priorities, and to set, adopt or modify accordingly its environmental laws and policies.
  • Each Party shall strive to ensure that its environmental laws and policies provide for and encourage high levels of environmental protection and shall strive to continue to improve its respective levels of environmental protection.
  • No Party shall fail to effectively enforce its environmental laws through a sustained or recurring course of action or inaction in a manner affecting trade or investment between the Parties, after the date of entry into force of this Agreement.
  • The Parties recognize that each Party retains the right to exercise discretion and to make decisions regarding: (a) investigatory, prosecutorial, regulatory, and compliance matters; and (b) the allocation of environmental enforcement resources with respect to other environmental laws determined to have higher priorities. Accordingly, the Parties understand that with respect to the enforcement of environmental laws a Party is in compliance with paragraph 4 where a course of action or inaction reflects a reasonable exercise of such discretion, or results from a bona fide decision regarding the allocation of such resources in accordance with priorities for enforcement of its environmental laws.
  • Without prejudice to paragraph 2, the Parties recognize that it is inappropriate to encourage trade or investment by weakening or reducing the protections afforded in their respective environmental laws. Accordingly, a Party shall not waive or otherwise derogate from, or offer to waive or otherwise derogate from, such laws in a manner that weakens or reduces the protections afforded in those laws in order to encourage trade or investment between the Parties.
  • Nothing in this Chapter shall be construed to empower a Party’s authorities to undertake environmental law enforcement activities in the territory of another Party.
  • The Parties recognize that multilateral environmental agreements to which they are party play an important role globally and domestically in protecting the environment and that their respective implementation of these agreements is critical to achieving the environmental objectives of these agreements. Accordingly, each Party affirms its commitment to implement the multilateral environmental agreements to which it is a party.
  • The Parties stress the need to enhance mutual supportiveness between trade and environment laws and policies through dialogue between the Parties on trade and environment issues of mutual interest, particularly with respect to negotiations and implementation of relevant multilateral environmental agreements and trade agreements.
  • If a Party is found to be in non-compliance with its obligations under a multilateral environmental agreement through applicable compliance procedures under such agreement, and such non-compliance is in a manner affecting trade or investment between the Parties, any other Party whose trade or investment is affected and is party to the same multilateral environmental agreement may request that the Committee be convened to consider the issue by delivering a written request to each national contact point. The Committee shall convene to consider whether the matter could benefit from cooperative activities under this agreement, with a view to facilitating the relevant Party coming into compliance with its obligations under the multilateral environmental agreement.

Montreal Protocol

  • The Parties recognize that emissions of certain substances can significantly deplete and otherwise modify the ozone layer in a manner that is likely to result in adverse effects on human health and the environment. To that end, each Party affirms its commitment to take measures to control the production and consumption of, and trade in, such substances by implementing its obligations under the Montreal Protocol on Substances that Deplete the Ozone Layer, including its amendments.
  • Each Party shall promote public awareness of its environmental laws, regulations and policies, including enforcement and compliance procedures, by ensuring that relevant information is available to the public.
  • Each Party shall ensure that interested persons residing in or established in the territory of such Party may request the Party’s competent authorities to investigate alleged violations of its environmental laws, and that the competent authorities shall give such requests due consideration, in accordance with the Party’s law.
  • Each Party shall ensure that judicial, quasi-judicial, or administrative proceedings for the enforcement of its environmental laws are available under its law and are fair, equitable, transparent, and comply with due process of law. Any hearings in such proceedings shall be open to the public, except where the administration of justice otherwise requires, and in accordance with its applicable laws.
  • Each Party shall ensure that persons with a recognized interest under its law in a particular matter have appropriate access to proceedings referred to in paragraph 3.
  • Each Party shall provide appropriate sanctions or remedies for violations of its environmental laws for the effective enforcement of those laws. Such sanctions or remedies may include a right to bring an action against the violator directly seeking damages or injunctive relief, or a right to seek governmental action.
  • Each Party shall ensure that, in the establishment of the sanctions or remedies referred to in paragraph 5, appropriate account is taken of relevant factors. Such factors may include the nature and gravity of the violation, damage to the environment, and any economic benefit the violator has derived from the violation.
  • Each Party shall seek to accommodate requests for information regarding the Party’s implementation of this Chapter.
  • Each Party shall make use of existing, or establish new, consultative mechanisms, such as national advisory committees, to seek views on matters related to the implementation of this Chapter. Such mechanisms may comprise persons with relevant experience, as appropriate, including experience in business, natural resource conservation and management, or other environmental matters.
  • Each Party shall provide for the receipt and consideration of written submissions from persons of that Party regarding its implementation of this Chapter. Each Party shall respond in a timely manner to such submissions in writing, and in accordance with domestic procedures, and make the submissions and its responses available to the public, such as by posting on an appropriate public website.
  • Each Party shall make its procedures for the receipt and consideration of written submissions readily accessible and publicly available, such as by posting on an appropriate public website. These procedures may provide that, to be eligible for consideration, the submission should:
    • be in writing in one of the official languages of the Party receiving the submission;
    • clearly identify the person making the submission;
    • provide sufficient information to allow for the review of the submission including any documentary evidence on which the submission may be based;
    • explain how, and to what extent, the issue raised affects trade or investment between the Parties;
    • not raise issues that are the subject of ongoing judicial or administrative proceedings; and
    • indicate whether the matter has been communicated in writing to relevant authorities of the Party and the Party’s response, if any.
  • Each Party shall notify the other Parties of the entity or entities responsible for receiving and responding to any written submissions referred to in paragraph 1 within 180 days after this Agreement enters into force.
  • Where a submission asserts that a Party is failing to effectively enforce its environmental laws and following the provision of the written response by that Party, any other Party may request that the Committee discuss that submission and written response with a view to further understanding the matter raised in the submission and, as appropriate, to consider whether the matter could benefit from cooperative activities.
  • At its first meeting, the Committee shall establish procedures for discussing submissions and responses referred to it. Such procedures may provide for the utilization of experts, or existing institutional bodies, for the purpose of developing a report for the Committee comprised of information on facts relevant to the matter.
  • No later than three years after the date this Agreement enters into force, and thereafter as agreed to by the Parties, the Committee shall prepare a written report for the Commission on the implementation of this Article. For purposes of preparing this report, each Party shall provide a written summary regarding its implementation activities under this Article.

Each Party should encourage enterprises operating within its territory or jurisdiction, to adopt voluntarily, into their policies and practices, principles of corporate social responsibility related to the environment, consistent with internationally recognized standards and guidelines that have been endorsed or are supported by that Party.

  • The Parties recognize that flexible, voluntary mechanisms, such as voluntary auditing and reporting, market-based incentives, voluntary sharing of information and expertise, and public-private partnerships, can contribute to the achievement and maintenance of high levels of environmental protection and complement domestic regulatory measures. The Parties further recognize that such mechanisms should be designed in a manner that maximizes their environmental benefits and avoids the creation of unnecessary barriers to trade.
  • Therefore, in accordance with its domestic laws, regulations or policies, and to the extent it considers appropriate, each Party shall encourage:
    • the use of such flexible and voluntary mechanisms to protect natural resources and the environment in its territory; and
    • its relevant authorities, businesses and business organizations, non-governmental organizations, and other interested persons involved in the development of criteria used in evaluating environmental performance, with respect to such voluntary mechanisms, to continue to develop and improve such criteria.
  • Further, where private sector entities or non-governmental organizations develop voluntary mechanisms for the promotion of products based on their environmental qualities, each Party should encourage those entities and organizations to develop such mechanisms that among other things:
    • are truthful, not misleading and take into account scientific and technical information;
    • where applicable and available, are based on relevant international standards, recommendations or guidelines, and best practices;
    • promote competition and innovation; and
    • do not treat a product less favorably on the basis of origin.
  • The Parties recognize the importance of cooperation as a mechanism for implementation of this Chapter and enhancing its benefits and to strengthen their joint and individual capacities to protect the environment and to promote sustainable development as they strengthen their trade and investment relations.
  • Taking account of their national priorities and circumstances, and available resources, the Parties shall cooperate to address matters of joint or common interest among the participating Parties related to the implementation of this Chapter, where there is mutual benefit from such cooperation. Such cooperation may be carried out on a bilateral or plurilateral basis between or among the Parties and, subject to consensus by the participating Parties, may include non-government bodies or organizations and non-Parties to this Agreement.
  • Each Party shall designate the authority or authorities responsible for cooperation related to the implementation of this Chapter to serve as its national contact point on matters relating to coordination of cooperation activities and shall notify the other Parties in writing within 90 days of entry into force of this Agreement of its contact point. On notifying the other Parties of its contact point, or at any time thereafter through the contact points, a Party may:
    • share its priorities for cooperation with the other Parties, including the objectives of such cooperation;
    • propose cooperation activities related to the implementation of this chapter to another Party or Parties.
  • Where possible and appropriate, the Parties shall seek to complement and utilize their existing cooperation mechanisms and take into account relevant work of regional and international organizations.
  • The Parties agree that cooperation may be undertaken through modes such as dialogues, workshops, seminars, conferences, collaborative programs and projects, and technical assistance to promote and facilitate cooperation and training; the sharing of best practices on policies and procedures; and exchange of experts.
  • In developing cooperative activities and programs, each Party shall, where relevant, identify performance measures and indicators to assist in examining and evaluating the efficiency, effectiveness and progress of specific cooperative activities and programs and share those measures and indicators, as well as the outcome of any evaluation during or following the completion of a cooperative activity or program, with the Parties.
  • The Parties, through their national contact points for cooperation, shall periodically review the implementation and operation of this Article and report their findings, which may include recommendations, to the Committee to inform its review under Article SS.11(3)(c). The Parties, through the Committee, may periodically evaluate the necessity of designating an entity to provide administrative and operational support for cooperative activities. In the event that the Parties agree to establish such an entity, the Parties shall agree on the provision of funds on a voluntary basis to support its operation.
  • Each Party shall promote public participation in the development and implementation, as appropriate, of cooperative activities. This may include activities such as encouraging and facilitating direct contacts and cooperation among relevant entities and the conclusion of arrangements among them for the conduct of cooperative activities under this Chapter.
  • All cooperative activities under this Chapter shall be subject to the availability of funds and of human and other resources, and to the applicable laws and regulations of the participating Parties. The funding of cooperative activities shall be decided by the participating Parties on a case-by-case basis.
  • Each Party shall designate a national contact point from its relevant national authorities within three months of the date of entry into force of this Agreement, in order to facilitate communication among the Parties in the implementation of this Chapter. Changes to the national contact point shall be communicated promptly to the other Parties as they occur.
  • The Parties hereby establish an Environment Committee (“Committee”) which shall comprise senior government representatives, or their designees, of the relevant trade and environment national authorities of each Party responsible for the implementation of this Chapter.
  • The purpose of the Committee is to oversee the implementation of this Chapter and its functions shall be to:
    • provide a forum to discuss and review the implementation of this Chapter;
    • provide periodic reports to the [Trans-Pacific Partnership Commission] regarding implementation of this Chapter;
    • provide a forum to discuss and review cooperative activities pursuant to this Chapter;
    • consider and endeavor to resolve matters referred to it under Article SS.12 [Consultations];
    • coordinate with other Committees under the Agreement as appropriate; and
    • perform any other functions as the Parties may agree.
  • The Committee shall meet within the first year of entry into force of this Agreement. Thereafter, the Committee shall normally meet every two years unless the Committee decides otherwise. The Chair of the Committee and the venue of its meetings shall rotate among each of the Parties in English alphabetical order, unless the Committee decides otherwise.
  • All decisions and reports of the Committee shall be made by consensus, unless otherwise agreed, or unless otherwise provided in this Chapter.
  • All decisions and reports of the Committee shall be made available to the public, unless otherwise decided by consensus.
  • During the fifth year after the entry into force of this Agreement, the Committee shall:
    • review the implementation and operation of this Chapter;
    • report its findings, which may include recommendations, to the Parties and the [Commission]; and
    • undertake subsequent reviews at intervals to be agreed by the Parties.
  • The Committee shall provide for public input on matters relevant to the Committee’s work, as appropriate, and shall hold a public session at each meeting.
  • The Parties recognize the importance of resource efficiency in the implementation of this Chapter and the desirability of utilizing, wherever possible, new technologies to facilitate communication and interaction among the Parties and with the public.
  • The Parties shall at all times endeavor to agree on the interpretation and application of this Chapter, and shall make every effort through dialogue, consultation, exchange of information, and, where appropriate, cooperation to address any matter that might affect the operation of this Chapter.
  • Any Party (“the requesting Party”) may request consultations with any other Party (“the responding Party”) regarding any matter arising under this Chapter by delivering a written request to the national contact point designated in accordance with Article SS.11 (Institutional Arrangements) of this Chapter. The request shall contain information that is specific and sufficient to enable the responding Party receiving the request to respond, including identification of the matter at issue and an indication of the legal basis for the request.
  • The requesting Party shall inform the other Parties through the national contact points, of its request for consultations. A Party other than the requesting or responding Party that considers it has a substantial interest in the matter (“a participating Party”) may participate in the consultations by delivering a written notice to the national contact point of the requesting and responding Parties within seven days of the date of delivery of the request for consultations. The participating party shall include in its notice an explanation of its substantial interest in the matter.
  • Unless they otherwise agree, the requesting and responding Parties (“the consulting Parties”) shall enter into consultations within 30 days after the receipt of the written request.
  • The consulting Parties shall make every effort to arrive at a mutually satisfactory resolution to the matter, which may include appropriate cooperative activities. The consulting Parties may seek advice or assistance from any person or body they deem appropriate in order to examine the matter.
  • If the consulting Parties fail to resolve the matter pursuant to Article SS.12.1 (Environment Consultations), any consulting Party may request that the Committee representatives from the consulting Parties convene to consider the matter by delivering a written request to the national contact point of the other consulting Party and circulating it to the national contact point of other Parties.
  • The Committee representatives from the consulting Parties shall meet no later than 90 days following the delivery of the request and shall seek to resolve the matter including, where appropriate, by gathering relevant scientific and technical information from governmental or non-governmental experts. Committee representatives from any other Party that considers it has a substantial interest in the matter may participate in the consultations.
  • If the consulting Parties have failed to resolve the matter pursuant to Article SS.12.2 (Committee Consultations), any of the consulting Parties may refer the matter to the relevant Ministers of the consulting Parties who shall seek to resolve the matter.
  • Consultations pursuant to Articles SS.12.1, SS.12.2, and SS.12.3 may be held in person or by any technological means available agreed by the consulting Parties. If in person, consultations shall be held in the capital of the responding Party, unless the consulting Parties otherwise agree.
  • Consultations shall be confidential and without prejudice to the rights of any Party in any future proceedings.
  • If the consulting Parties have failed to resolve the matter within 90 days of the request made pursuant to Article SS.12.3 (Ministerial Consultations), the complaining Party may request in writing the establishment of an arbitral tribunal under this Chapter.
  • The complaining Party shall circulate the request to all Parties through the national contact points designated in accordance with Article SS.10 (Institutional Arrangements) of this Chapter.
  • An arbitral tribunal shall be established upon delivery of a request.
  • The complaining Party shall include in the request to establish an arbitral tribunal an identification of the measure or other matter at issue and a brief summary of the legal basis of the complaint sufficient to present the problem clearly.
  • A Party that is eligible under paragraph 1 to request the establishment of an arbitral tribunal regarding the same matter may join the arbitral tribunal proceedings as a complaining Party upon delivery of a written notice to the other Parties. The Party joining the proceedings shall deliver the notice at the earliest possible time and in any event no later than seven days after the date of delivery of the request for the establishment of an arbitral tribunal.
  • Where there is more than one dispute on the same matter arising under this Chapter against a Party, the disputes may be joined, subject to the agreement of all disputing Parties.

Unless the disputing Parties otherwise agree, the terms of reference of the arbitral tribunal constituted under paragraph 1, shall be:

“to examine, in the light of the relevant provisions of the Environment Chapter, the
matter referred to in the request for the establishment of the arbitral tribunal and in any
notice to join the arbitral tribunal proceedings pursuant to Article SS.12.4, and to issue a report, in accordance with Article BBB.16 (Final Report) of Chapter BBB (Dispute Settlement), making recommendations for the resolution of the matter.”

  • For purposes of selecting an arbitral tribunal, the following procedures shall apply:
    • the arbitral tribunal shall comprise three members;
    • within 20 days of receiving the request to establish an arbitral tribunal under Article SS.12.4 Request of the Arbitral Tribunal, the complaining Party or Parties and the responding Party shall each select one arbitrator;
    • if one Party fails to select its arbitrator within such period, the other Party shall select the arbitrator from among qualified individuals who are nationals of the Party that failed to select its arbitrator;
    • the following procedures shall apply to the selection of the chair:
      • the responding Party shall provide the complaining Party with the names of three qualified candidates. The names shall be provided within 20 days of receiving the request to establish the arbitral tribunal;
      • the complaining Party may choose one of the individuals to be the chair or, if the names were not provided or none of the individuals are acceptable, provide the responding Party with the names of three individuals who are qualified to be the chair. Those names shall be provided no later than five days after receiving the names under subparagraph (i) or 25 days after the receipt of the request for the establishment of the arbitral tribunal, whichever is earlier; and
      • the responding Party must choose one of the three individuals to be the chair within five days of receiving the names under subparagraph (ii).
  • Members of the arbitral tribunal shall:
    • have specialized knowledge or expertise in environmental law, issues addressed in this Chapter and, to the extent possible, the resolution of disputes arising under international agreements;
    • be chosen on the basis of objectivity, reliability and sound judgment;
    • be independent of, and not be affiliated with or take instructions from any Party; and
    • comply with a code of conduct established by the Parties under Article BBB.X of Chapter BBB (Dispute Settlement).

The Rules of Procedure under Article BBB.11 (Rules of Procedure {for Arbitral Tribunals}) of Chapter BBB (Dispute Settlement) shall apply to arbitral proceedings under this Chapter.

Drafter’s Note: This provision to be reviewed once Article BBB.11 is agreed.

A Party that is not a disputing Party, and that considers it has a substantial interest in the matter before the arbitral tribunal, shall, on delivery of a written notice to the disputing Parties, be entitled to attend all hearings, to make written submissions, to present views orally to the arbitral tribunal, and to receive written submissions of the disputing Parties. The delivery of the written notice shall occur no later than 10 days after the date of circulation of the request for the establishment of the arbitral tribunal pursuant to paragraph 1 of Article SS.12.4 (Arbitral Tribunal).

At the request of a disputing Party or on its own initiative, the arbitral tribunal may seek information and technical advice from any person or body that it deems appropriate, provided that the disputing Parties so agree and subject to such terms and conditions as the disputing Parties may agree. The disputing Parties shall have an opportunity to comment on any information or advice so obtained.

The arbitral tribunal shall present to the disputing Parties an initial report in accordance with Article BBB.15 (Initial Report) of Chapter BBB (Dispute Settlement). For the purposes of this Chapter, the initial report shall contain:

  • findings of fact;
  • the determination of the arbitral tribunal as to whether the responding Party has failed to comply with its obligations under this Chapter;
  • any other determination requested in the terms of reference;
  • recommendations for the resolution of the dispute; and
  • the rationale for any findings, determinations and recommendations made by the arbitral tribunal.
  • The arbitral tribunal shall present a final report to the disputing Parties in accordance with Article BBB.16 (Final Report) of Chapter BBB (Dispute Settlement).

    • If in its final report the arbitral tribunal determines that the responding Party has failed to comply with its obligations under this Chapter, the disputing Parties shall endeavor to agree within 90 days of the public issuance of the final report on a mutually satisfactory action plan pursuant to the determinations and recommendations of the tribunal. The Parties shall notify the Committee of such action plans and its implementation timeframes.
    • The responding Party shall keep the Committee informed in a timely manner through the national contact points of any actions or measures to be implemented pursuant to the determinations and recommendations of the tribunal, including any action plan agreed to pursuant to paragraph 1.
    • For any matter affecting the interpretation or application of this Chapter, the Parties shall only have recourse to the rules and procedures as set out in this Chapter. At any time, the Parties may have recourse to good offices, conciliation, or mediation to resolve that matter.
    • The Parties recognize the importance of conservation and sustainable use of biological diversity and their key role in achieving sustainable development.
    • Accordingly, the Parties are committed to promoting and encouraging the conservation and sustainable use of biological diversity and sharing in a fair and equitable way the benefits arising from the utilization of genetic resources.
    • The Parties reiterate their commitment to, subject to national legislation, respecting, preserving and maintaining the knowledge, innovations, and practices of indigenous and local communities embodying traditional lifestyles relevant for the conservation and sustainable use of biological diversity, and encourage the equitable sharing of the benefits arising from the utilization of such knowledge, innovations and practices.
    • The Parties recognize the sovereign rights of States over their natural resources, and that the authority to determine access to genetic resources rests with the national governments and is subject to national legislation.
    • The Parties recognize that, subject to national legislation, access to genetic resources for their utilization, where granted, should be subject to the prior informed consent of the Party providing such resources, unless otherwise determined by that Party. The Parties further recognize that benefits arising from the utilization of these genetic resources should be shared in a fair and equitable way. Such sharing should be upon mutually agreed terms.
    • The Parties also recognize the importance of public participation and consultations, as provided for by domestic law or policy, on matters concerning the conservation and sustainable use of biological diversity. Each Party should make publicly available information about its programs and activities, including cooperative programs, related to the conservation and sustainable use of biological diversity.
    • The Parties are committed to enhance their cooperative efforts in areas of mutual interest related to biological diversity, including through Article SS.10 (Cooperation). Cooperation may include, but is not limited to, exchanging information and experiences in areas related to:
      • the conservation and sustainable use of biological diversity;
      • the protection and maintenance of ecosystem and ecosystem services; and
      • the fair and equitable sharing of the benefits arising out of the utilization of genetic resources, including by appropriate access to genetic resources.
    • The Parties recognize that the movement of terrestrial and aquatic invasive alien species across borders through trade-related pathways can adversely affect the environment, economic activities and development, and human health and that prevention, detection, control and, when possible, eradication, of invasive alien species are critical strategies for managing these impacts.
    • Accordingly, the Committee shall coordinate with the Committee on Sanitary and Phytosanitary Measures established under Article X.X of Chapter XXX (Sanitary and Phytosanitary Measures) to identify cooperative opportunities to share information and management experiences on the movement, prevention, detection, control and eradication of invasive alien species, with a view to enhancing efforts to assess and address the risks and adverse impacts of invasive alien species.
    • The Parties acknowledge climate change as a global concern that requires collective action and recognize the importance of implementation of their respective commitments under the United Nations Framework Convention on Climate Change (UNFCCC) and its related legal instruments.
    • The Parties recognize the desirability that trade and climate change policies be mutually supportive, and that policies and measures to deal with climate change should be cost effective. The Parties further recognize the role that market and non-market approaches can play in achieving climate change objectives.
    • The Parties agree that mitigation and adaptation actions should reflect domestic circumstances and capabilities, and note efforts underway in a range of international fora to: increase energy efficiency; develop low-carbon technologies and alternative and renewable energy sources; promote sustainable transport and sustainable urban infrastructure development; address deforestation and forest degradation; reduce emissions in international maritime shipping and air transport; improve monitoring, reporting and verification of greenhouse gas emissions; and develop adaptation actions for climate change. The Parties agree to encourage and facilitate cooperation on the complementary, trade-related, aspects of these efforts in areas of mutual interest.
    • The Parties recognize that there are a suite of economic and environmental policy instruments that can play a role in achieving domestic climate change objectives and in helping achieve their international climate change commitments. The Parties acknowledge the value of sharing information and experiences in developing and implementing such instruments. Accordingly, where relevant and appropriate, the Parties agree to discuss matters such as:
      • best practices and lessons learned in designing, implementing, and operating mechanisms to reduce carbon emissions, including market and non-market measures;
      • best practices in the design, implementation and enforcement of regulatory instruments; and
      • best practices and lessons learned to enhance the transparency and accuracy of such instruments.
    • Activities pursuant to paragraphs 3 and 4 may, at the discretion of the participating Parties and as appropriate, involve other governments in the Asia-Pacific region with an interest in such mechanisms, as well as the private sector and non-governmental organizations.
    • The Parties recognize their respective commitments in APEC to rationalize and phase out over the medium term inefficient fossil fuel subsidies that encourage wasteful consumption, while recognizing the importance of providing those in need with essential energy services. Accordingly, the Parties agree to undertake, as appropriate, cooperative and capacity building activities designed to facilitate effective implementation of these commitments, including in applying the APEC Voluntary Reporting Mechanism.
    • The Parties acknowledge their role as major consumers, producers and traders of fisheries products and the importance of the marine fisheries sector to their development and to the livelihoods of their fishing communities, including artisanal or small-scale fisheries. The Parties also acknowledge that the fate of marine capture fisheries is an urgent resource problem facing the international community. Accordingly, the Parties recognize the importance of taking measures aimed at the conservation and the sustainable management of fisheries.
    • In this regard, the Parties acknowledge that inadequate fisheries management, fisheries subsidies that contribute to overfishing and overcapacity, and Illegal, Unreported and Unregulated (IUU) fishing can have significant negative impacts on trade, development and the environment and, thus recognize the need for individual and collective action to address the problems of overfishing and unsustainable utilization of fisheries resources.
    • Accordingly, each Party shall seek to operate a fisheries management system that regulates marine wild capture fishing and that is designed to prevent overfishing and overcapacity, to reduce bycatch of non-target species and juveniles, including through the regulation of fishing gear that results in bycatch and of fishing in areas where bycatch is likely to occur, and to promote the recovery of overfished stocks for all marine fisheries in which its persons conduct fishing activities. Such a management system shall be based on internationally recognized best practices for fisheries management and conservation as reflected in the relevant provisions of international instruments aimed at ensuring the sustainable use and conservation of marine species.
    • Each Party’s fisheries management system shall, based on the best scientific evidence available, promote the long-term conservation of sharks, marine turtles, seabirds, and marine mammals, through the implementation and effective enforcement of conservation and management measures that may include, as appropriate, the collection of species specific data, fisheries bycatch mitigation measures, catch limits, and prohibitions, such as on finning and commercial whaling.
    • The Parties also recognize the importance of protecting and preserving the marine environment. To that end, each Party affirms its commitment to take measures to prevent the pollution of the marine environment by implementing its obligations under the International Convention for the Prevention of Pollution from Ships, 1973, as modified by the Protocol of 1978 relating thereto and by the Protocol of 1997 (MARPOL) and its associated Annexes.
    • The Parties recognize that the implementation of a fisheries management system that is designed to prevent overfishing and overcapacity and to promote the recovery of overfished stocks must include the control, reduction and eventual elimination of all subsidies that contribute to overfishing and overcapacity. To that end, no Party shall grant or maintain any of the following subsidies within the meaning of Article 1.1 of the SCM Agreement that are specific within the meaning of Article 2 of the SCM Agreement:
      • subsidies that target the fishing of fish stocks that are in an overfished condition; and
      • subsidies provided to any fishing vessel while listed by the flag State or a relevant Regional Fisheries Management Organization or Arrangement for illegal, unreported or unregulated fishing in accordance with the rules and procedures of such organization or arrangement and in conformity with international law.
    • Subsidy programs established by a Party before the entry into force of this Agreement and which are inconsistent with paragraph 6 (a) shall be brought into conformity with that paragraph as soon as possible and no later than [X year/s] of the date of entry into force of this Agreement.
    • In relation to subsidies that are not prohibited by paragraph 6 (a) and (b), and taking into consideration their social and developmental concerns, including food security concerns, each Party shall make best efforts to refrain from introducing new, or extending or enhancing existing, subsidies within the meaning of Article 1.1 of the SCM Agreement, to the extent they are specific within the meaning of Article 2 of the SCM Agreement, that contribute to overfishing or overcapacity.
    • With a view to achieving the objective of eliminating subsidies that contribute to overfishing and overcapacity, the Parties shall review the disciplines in paragraph 6 at regular meetings of the Committee.
    • Each Party shall notify to the other Parties, by the first anniversary of the entry into force of this Agreement and every two years thereafter, any subsidy within the meaning of Article 1.1 of the SCM Agreement which is specific within the meaning of Article 2 of the SCM Agreement, and that the Party grants or maintains to persons engaged in fishing or fishing related activities.
    • Such notifications shall cover subsidies provided within the previous two-year period and should include, in addition to the information required under Article 25.3 of the SCM Agreement and the SCM notification process, to the extent possible, the following information:
      • program name;
      • legal authority for the program;
      • catch data for the species targeted by the subsidy;
      • status of the fish stocks targeted by the subsidy (e.g. overexploited, depleted, fully exploited, recovering, underexploited);
      • fleet capacity in the fishery for which the subsidy is provided;
      • conservation and management measures in place in the relevant fishery; and
      • total imports/exports per species.
    • Each Party shall also provide, to the extent possible, information in relation to other fisheries subsidies that the Party grants or maintains not covered by paragraph 6 above, in particular fuel subsidies.
    • A Party may request additional information from the notifying Party regarding such notifications. The notifying Party shall respond to such requests as quickly as possible and in a comprehensive manner.
    • The Parties recognize the importance of concerted international action to address IUU fishing, as reflected in regional and international instruments, and shall endeavor to improve cooperation internationally in this regard, including with and through competent international organizations.
    • In support of efforts to combat IUU fishing practices and to help deter trade in products from species harvested from such practices, each Party shall:
      • cooperate with other Parties to identify needs and build capacity that would support the implementation of this Article;
      • support monitoring, control, surveillance, compliance, and enforcement systems, including by adopting, reviewing, or revising, as appropriate, measures to deter vessels flying its flag and its nationals from engaging in IUU activities and measures to address the transshipment of fish or fish products caught through IUU activities;
      • implement port state measures;
      • strive to act consistently with relevant conservation and management measures adopted by regional fisheries management organizations of which it is not a member so as not to undermine those measures; and
      • endeavor not to undermine catch or trade documentation schemes operated by regional fisheries management organizations (RFMO) or arrangements (RFMA) or an international organization that has in its scope the management of shared fisheries resources, including straddling and highly migratory species, where such Party is not a Member of those organizations or arrangements.
    • Consistent with Article ZZ.2.2 (Publication) of Chapter XXX (Transparency), a Party shall, to the extent possible, provide other Parties the opportunity to comment on proposed measures designed to prevent trade in fisheries products resulting from IUU fishing.
    • The Parties affirm the importance of combating the illegal take of, and illegal trade in, wild fauna and flora, and acknowledge that such trade undermines efforts to conserve and sustainably manage such natural resources, distorts legal trade in wild fauna and flora, and reduces the economic and environmental value of these natural resources.
    • Accordingly, each Party affirms its commitment to take measures to ensure that international trade of wild flora and fauna does not threaten the survival of such species by implementing its obligations under the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES).
    • The Parties commit to promote conservation and to combat the illegal take of, and illegal trade in, wild fauna and flora. To that end, the Parties shall:
      • exchange information and experiences on issues of mutual interest related to combating the illegal take of, and illegal trade in, wild fauna and flora, including combating illegal logging and associated illegal trade, and promoting the legal trade in associated products; and
      • undertake, as appropriate, joint activities on conservation issues of mutual interest, including through relevant regional and international fora.
    • Each Party further commits to:
      • take appropriate measures to protect and conserve wild fauna and flora that are at risk within its territory, including measures to conserve the integrity of designated natural protected areas;
      • maintain or strengthen government capacity and institutional frameworks to promote sustainable forest management and wild fauna and flora conservation, and endeavor to enhance public participation and transparency therein; and
      • endeavor to develop and strengthen cooperation and consultation with interested non-governmental entities in order to enhance implementation of measures to combat the illegal take of or illegal trade in wild fauna and flora.
    • In a further effort to combat the illegal take of, and illegal trade in, wild fauna and flora, including parts and products thereof, each Party shall adopt or maintain appropriate measures that allow it to take action to prohibit the trade, transshipment or transaction within its territory of wild fauna and flora that, based on credible evidence, were taken or traded in violation of that Party’s law or a foreign law, the primary purpose of which is to conserve, protect or manage wild fauna or flora. Such measures should include sanctions or penalties at levels which act as a deterrent to such trade, transshipments or transaction.
    • The Parties recognize that each Party retains the right to exercise investigatory and enforcement discretion in its implementation of paragraph 5, including by taking into account in relation to each situation the strength of the available evidence and the seriousness of the suspected violation. In addition, the Parties recognize that in implementing paragraph 5, each Party retains the right to make decisions regarding the allocation of enforcement resources.
    • In order to promote the widest measure of law enforcement cooperation and information sharing among the Parties to combat the illegal take of or illegal trade in wild fauna and flora, the Parties shall endeavor to identify opportunities, consistent with their respective domestic law and in accordance with applicable international agreements, to enhance law enforcement cooperation and information sharing, for example by creating and participating in law enforcement networks.
      • The Parties recognize the importance of trade and investment in environmental goods and services as a means of improving environmental and economic performance and addressing global environmental challenges.
      • Accordingly, each Party has, consistent with its national circumstances, eliminated all customs duties upon entry into force of this Agreement on a wide range of environmental goods and as soon as possible on all other environmental goods.
      • Furthermore, in recognition of the importance of environmental services in supporting environmental goods trade and delivering benefits in their own right, each Party has, consistent with national circumstances, limited its restrictions on trade in environmental services, including environmental service suppliers, in accordance with Chapter XX (Investment), Chapter XX (Cross Border Trade in Services), and Chapter XX (Temporary Entry for Business Persons).
      • The Committee shall consider issues identified by Parties related to the trade in environmental goods and services, including issues identified as potential non-tariff barriers to such trade. The Parties shall endeavor to address any potential barriers to trade that may be identified by a Party, including by working through the Committee and in conjunction with other relevant TPP Committees, as appropriate.
      • The Parties may develop bilateral and plurilateral cooperative projects on environmental goods and services to address current and future global trade-related environmental challenges.
      • Before initiating dispute settlement under the Agreement for a matter arising under Article SS.3 [effective enforcement obligation and non-derogation], a Party (“the initiating Party”) shall consider whether it maintains environmental laws that are substantially equivalent in scope to those that would be the subject of the dispute and exercise restraint in taking recourse to dispute settlement under this Agreement with respect to any laws for which it has no substantially equivalent obligation.
      • Where an initiating Party has requested consultation with another Party (“the responding Party”) under Article SS.12 [Environment Government Consultations] for a matter arising under Article SS.3 [effective enforcement and non-derogation], and the responding Party considers that the initiating Party does not maintain environmental laws that are substantially equivalent in scope to those that would be the subject of the dispute, the Parties shall discuss the issue during the consultations.

certificates - How does SSL work? - Information Security Stack Exchange

$
0
0

Comments:"certificates - How does SSL work? - Information Security Stack Exchange"

URL:http://security.stackexchange.com/questions/20803/how-does-ssl-work


Since the general concept of SSL has already been covered into some other questions (e.g. this one and that one), this time I will go for details. Details are important. This answer is going to be somewhat verbose.

SSL is a protocol with a long history and several versions. The first prototypes came from Netscape, when they were developing the first versions of their flagship browser, Netscape Navigator (this browser killed off Mosaic in the early days of the Browser Wars, which are still raging, albeit with new competitors). Version 1 was never made public, so we do not know what it looked like. SSL version 2 is described in a draft which can still be found online; it has a number of weaknesses, some of them rather serious, so it is deprecated and newer SSL/TLS implementations do not support it (while older implementations have it disabled by default). I will not speak of SSL version 2 any further, except as an occasional reference.

SSL version 3 (which I will call "SSLv3") was an enhanced protocol which still works today and is widely supported. Although still a property of Netscape Communications (or whoever owns that nowadays), the protocol has been published as an "historical RFC" (RFC 6101). Meanwhile, the protocol has been standardized, with a new name in order to avoid legal issues; the new name is TLS.

Three versions of TLS have been produced so far, each with its dedicated RFC: TLS 1.0, TLS 1.1 and TLS 1.2. They are internally very similar to each other, and to SSLv3, to the point that an implementation can easily support SSLv3 and all three TLS versions with at least 95% of the code being common. Still internally, all versions are designated by a version number with the major.minor format; SSLv3 is then 3.0, while the TLS versions are, respectively, 3.1, 3.2 and 3.3. Thus, it is no wonder that TLS 1.0 is sometimes called SSL 3.1 (and it is not incorrect either). SSL 3.0 and TLS 1.0 differ by only some minute details. TLS 1.1 and 1.2 are not yet widely supported, although there is impetus for adopting them because of possible weaknesses in the older versions (see below, for the "BEAST attack"). SSLv3 and TLS 1.0 are supported "everywhere" (even IE 6.0 knows them).
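
As a quick reference for the internal numbering just described, here is a tiny Python sketch (the dictionary name is mine, not part of any standard) mapping wire version numbers to protocol names:

    # (major, minor) version numbers as they appear on the wire
    WIRE_VERSIONS = {
        (3, 0): "SSLv3",
        (3, 1): "TLS 1.0",   # hence the occasional name "SSL 3.1"
        (3, 2): "TLS 1.1",
        (3, 3): "TLS 1.2",
    }
    print(WIRE_VERSIONS[(3, 1)])   # -> TLS 1.0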

SSL aims at providing a secure bidirectional tunnel for arbitrary data. Consider TCP, the well known protocol for sending data over the Internet. TCP works over IP "packets" and provides a bidirectional tunnel for bytes; it works with all byte values and sends them as two streams which can operate simultaneously. TCP handles the hard work of splitting the data into packets, acknowledging them, reassembling them back into their right order, while removing duplicates and reemitting lost packets. From the point of view of the application which uses TCP, there are just two streams, and the packets are invisible; in particular, the streams are not split into "messages" (it is up to the application to define its own encoding rules if it wishes to have messages, and that's precisely what HTTP does).

TCP is reliable in the presence of "accidents", i.e. transmission errors due to flaky hardware, network congestion, people with smartphones who walk out of range of a given base station, and other non-malicious events. However, an ill-intentioned individual (the "attacker") with some access to the transport medium could read all the transmitted data and/or alter it intentionally, and TCP does not protect against that. Hence SSL.

SSL assumes that it works over a TCP-like protocol, which provides a reliable stream; SSL does not implement reemission of lost packets and things like that. The attacker is assumed to have the power to disrupt communication completely in an unavoidable way (for instance, he can cut the cables), so SSL's job is to:

  • detect alterations (the attacker must not be able to alter the data silently);
  • ensure data confidentiality (the attacker must not gain knowledge of the exchanged data).

SSL fulfills these goals to a large (but not absolute) extent.

SSL is layered and the bottom layer is the record protocol. Whatever data is sent in a SSL tunnel is split into records. Over the wire (the underlying TCP socket or TCP-like medium), a record looks like this:

HH V1:V2 L1:L2 data

where:

  • HH is a single byte which indicates the type of data in the record. Four types are defined: change_cipher_spec (20), alert (21), handshake (22) and application_data (23).
  • V1:V2 is the protocol version, over two bytes. For all versions currently defined, V1 has value 0x03, while V2 has value 0x00 for SSLv3, 0x01 for TLS 1.0, 0x02 for TLS 1.1 and 0x03 for TLS 1.2.
  • L1:L2 is the length of data, in bytes (big-endian convention is used: the length is 256*L1+L2). The total length of data cannot exceed 18432 bytes, but in practice it cannot even reach that value.

So a record has a five-byte header, followed by at most 18 kB of data. The data is where symmetric encryption and integrity checks are applied. When a record is emitted, both sender and receiver are supposed to agree on which cryptographic algorithms are currently applied, and with which keys; this agreement is obtained through the handshake protocol, described in the next section. Compression, if any, is also applied at that point.
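As a concrete (and heavily simplified) illustration of the layout just described, here is a minimal Python sketch that splits a raw byte stream into records and decodes the five-byte header; only the HH V1:V2 L1:L2 structure comes from the protocol, everything else is an assumption made for illustration.

import struct

RECORD_TYPES = {20: "change_cipher_spec", 21: "alert",
                22: "handshake", 23: "application_data"}

def read_records(buf):
    # Returns (parsed records, leftover bytes that do not yet form a full record).
    records = []
    while len(buf) >= 5:
        rtype, major, minor, length = struct.unpack(">BBBH", buf[:5])
        if len(buf) < 5 + length:
            break  # incomplete record; wait for more data from the TCP stream
        records.append((RECORD_TYPES.get(rtype, rtype), (major, minor), buf[5:5 + length]))
        buf = buf[5 + length:]
    return records, buf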

In full details, the building of a record works like this:

  • Initially, there are some bytes to transfer; these are application data or some other kind of bytes. This payload consists of at most 16384 bytes, but possibly less (a payload of length 0 is legal, but it turns out that Internet Explorer 6.0 does not like that at all).
  • The payload is then compressed with whatever compression algorithm is currently agreed upon. Compression is stateful, and thus may depend upon the contents of previous records. In practice, compression is either "null" (no compression at all) or "Deflate" (RFC 3749), the latter being currently courteously but firmly shown the exit door in the Web context, due to the recent CRIME attack. Compression aims at shortening data, but it must necessarily expand it slightly in some unfavourable situations (due to the pigeonhole principle). SSL allows for an expansion of at most 1024 bytes. Of course, null compression never expands (but never shortens either); Deflate will expand by at most 10 bytes, if the implementation is any good.
  • The compressed payload is then protected against alterations and encrypted. If the current encryption-and-integrity algorithms are "null", then this step is a no-operation. Otherwise, a MAC is appended, then some padding (depending on the encryption algorithm), and the result is encrypted. These steps again induce some expansion, which the SSL standard limits to 1024 extra bytes (combined with the maximum expansion from the compression step, this brings us to the 18432 bytes, to which we must add the 5-byte header).

The MAC is, usually, HMAC with one of the usual hash functions (mostly MD5, SHA-1 or SHA-256)(with SSLv3, this is not the "true" HMAC but something very similar and, to the best of our knowledge, as secure as HMAC). Encryption will use either a block cipher in CBC mode, or the RC4 stream cipher. Note that, in theory, other kinds of modes or algorithms could be employed, for instance one of these nifty modes which combine encryption and integrity checks; there are even some RFCs for that. In practice, though, deployed implementations do not know of these yet, so they do HMAC and CBC. Crucially, the MAC is first computed and appended to the data, and the result is encrypted. This is MAC-then-encrypt and it is actually not a very good idea. The MAC is computed over the concatenation of the (compressed) payload and a sequence number, so that an industrious attacker cannot swap records.
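To make the MAC-then-encrypt ordering concrete, here is a hedged sketch of how a record payload could be protected for a CBC cipher suite. It assumes HMAC-SHA-256 and AES-128-CBC with a TLS 1.1+-style explicit IV, uses the third-party Python cryptography package, and leaves out key derivation, per-direction state and error handling; it illustrates the ordering described above, not the actual record layer of any implementation.

import hmac, hashlib, os, struct
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def protect_record(payload, mac_key, enc_key, seq_num, rtype=23, version=(3, 3)):
    header = struct.pack(">BBBH", rtype, version[0], version[1], len(payload))
    # 1. MAC over the sequence number, the record header and the (compressed) payload.
    mac = hmac.new(mac_key, struct.pack(">Q", seq_num) + header + payload,
                   hashlib.sha256).digest()
    plaintext = payload + mac
    # 2. TLS-style CBC padding: pad_len + 1 bytes, each with value pad_len.
    pad_len = 15 - (len(plaintext) % 16)
    plaintext += bytes([pad_len] * (pad_len + 1))
    # 3. Encrypt; the explicit per-record IV is sent along with the ciphertext.
    iv = os.urandom(16)
    enc = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).encryptor()
    fragment = iv + enc.update(plaintext) + enc.finalize()
    # The header on the wire carries the length of the protected fragment.
    return struct.pack(">BBBH", rtype, version[0], version[1], len(fragment)) + fragment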

The handshake is a protocol which is played within the record protocol. Its goal is to establish the algorithms and keys which are to be used for the records. It consists of messages. Each handshake message begins with a four-byte header, one byte which describes the message type, then three bytes for the message length (big-endian convention). The successive handshake messages are then sent with records tagged with the "handshake" type (first byte of the header of each record has value 22).

Note the layers: the handshake messages, complete with four-byte header, are then sent as records, and each record also has its own header. Furthermore, several handshake messages can be sent within the same record, and a given handshake message can be split over several records. From the point of view of the module which builds the handshake messages, the "records" are just a stream on which bytes can be sent; it is oblivious to the actual split of that stream into records.
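A tiny sketch of that four-byte framing, with the message type code left as a parameter (1 is ClientHello in the standard, for instance):

def frame_handshake(msg_type, body):
    # One type byte, then a 24-bit big-endian length, then the message body.
    # The result is handed to the record layer, which may split or coalesce it.
    return bytes([msg_type]) + len(body).to_bytes(3, "big") + body

# e.g. frame_handshake(1, client_hello_body) would frame a ClientHello
# (client_hello_body is a placeholder here).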

Full Handshake

Initially, client and server "agree upon" null encryption with no MAC and null compression. This means that the first records they send will be transmitted as cleartext and unprotected.

First message of a handshake is a ClientHello. It is the message by which the client states its intention to do some SSL. Note that "client" is a symbolic role; it means "the party which speaks first". It so happens that in the HTTPS context, which is HTTP-within-SSL-within-TCP, all three layers have a notion of "client" and "server", and they all agree (the TCP client is also the SSL client and the HTTP client), but that's kind of a coincidence.

The ClientHello message contains:

  • the maximum protocol version that the client wishes to support;
  • the "client random" (32 bytes, out of which 28 are suppose to be generated with a cryptographically strong number generator);
  • the "session ID" (in case the client wants to resume a session in an abbreviated handshake, see below);
  • the list of "cipher suites" that the client knows of, ordered by client preference;
  • the list of compression algorithms that the client knows of, ordered by client preference;
  • some optional extensions.

A cipher suite is a 16-bit symbolic identifier for a set of cryptographic algorithms. For instance, the TLS_RSA_WITH_AES_128_CBC_SHA cipher suite has value 0x002F, and means "records use HMAC/SHA-1 and AES encryption with a 128-bit key, and the key exchange is done by encrypting a random key with the server's RSA public key".
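If you are curious which suites a stock client library would offer, here is a small sketch using Python's ssl module (an arbitrary choice of library for illustration); the low 16 bits of OpenSSL's internal cipher id generally match the 16-bit TLS cipher suite number:

import ssl

ctx = ssl.create_default_context()
for suite in ctx.get_ciphers():
    # e.g. 0x2f  TLS_RSA_WITH_AES_128_CBC_SHA (under OpenSSL's own naming)
    print(hex(suite["id"] & 0xFFFF), suite["name"])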

The server responds to the ClientHello with a ServerHello which contains:

  • the protocol version that the client and server will use;
  • the "server random" (32 bytes, with 28 random bytes);
  • the session ID for this connection;
  • the cipher suite that will be used;
  • the compression algorithm that will be used;
  • optionally, some extensions.

The full handshake looks like this:

 Client                                   Server

 ClientHello              -------->
                                          ServerHello
                                          Certificate*
                                          ServerKeyExchange*
                                          CertificateRequest*
                          <--------       ServerHelloDone
 Certificate*
 ClientKeyExchange
 CertificateVerify*
 [ChangeCipherSpec]
 Finished                 -------->
                                          [ChangeCipherSpec]
                          <--------       Finished
 Application Data         <------->       Application Data

(This schema has been shamelessly copied from the RFC.)

We see the ClientHello and ServerHello. Then, the server sends a few other messages, which depend on the cipher suite and some other parameters:

  • Certificate: the server's certificate, which contains its public key. More on that below. This message is almost always sent, except if the cipher suite mandates a handshake without a certificate.
  • ServerKeyExchange: some extra values for the key exchange, if what is in the certificate is not sufficient. In particular, the "DHE" cipher suites use an ephemeral Diffie-Hellman key exchange, which requires that message.
  • CertificateRequest: a message requesting that the client also identifies itself with a certificate of its own. This message contains the list of names of trust anchors (aka "root certificates") that the server will use to validate the client certificate.
  • ServerHelloDone: a marker message (of length zero) which says that the server is finished, and the client should now talk.

The client must then respond with:

  • Certificate: the client certificate, if the server requested one. There are subtle variations between versions (with SSLv3, the client must omit this message if it does not have a certificate; with TLS 1.0+, in the same situation, it must send a Certificate message with an empty list of certificates).
  • ClientKeyExchange: the client part of the actual key exchange (e.g. some random value encrypted with the server RSA key).
  • CertificateVerify: a digital signature computed by the client over all previous handshake messages. This message is sent when the server requested a client certificate, and the client complied. This is how the client proves to the server that it really "owns" the public key which is encoded in the certificate it sent.

Then the client sends a ChangeCipherSpec message, which is not a handshake message: it has its own record type, so it will be sent in a record of its own. Its contents are purely symbolic (a single byte of value 1). This message marks the point at which the client switches to the newly negotiated cipher suite and keys. The subsequent records from the client will then be encrypted.

The Finished message is a cryptographic checksum computed over all previous handshake messages (from both the client and server). Since it is emitted after the ChangeCipherSpec, it is also covered by the integrity check and the encryption. When the server receives that message and verifies its contents, it obtains a proof that it has indeed talked to the same client all along. This message protects the handshake from alterations (the attacker cannot modify the handshake messages and still get the Finished message right).
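For the curious, here is a hedged sketch of how that checksum is computed in TLS 1.2 (RFC 5246): verify_data is 12 bytes of the PRF applied to the master secret, a label, and the hash of all previous handshake messages. The inputs below are placeholders; only the construction is the point.

import hmac, hashlib

def p_sha256(secret, seed, length):
    # TLS 1.2 P_SHA256: HMAC chaining, concatenated until enough bytes are produced.
    out, a = b"", seed
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()
    return out[:length]

def finished_verify_data(master_secret, handshake_messages, label=b"client finished"):
    seed = label + hashlib.sha256(handshake_messages).digest()
    return p_sha256(master_secret, seed, 12)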

The server finally responds with its own ChangeCipherSpec then Finished. At that point, the handshake is finished, and the client and server may exchange application data (in encrypted records tagged as such).

To remember: the client suggests but the server chooses. The cipher suite is in the hands of the server. Courteous servers are supposed to follow the preferences of the client (if possible), but they can do otherwise and some actually do (e.g. as part of protection against BEAST).
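As a hedged illustration of "the client suggests but the server chooses" with Python's ssl module (example.com is just a placeholder host), a client can inspect what the server actually picked:

import socket, ssl

host = "example.com"  # placeholder
ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=host) as tls:
        print("negotiated protocol:", tls.version())
        print("server's choice of cipher suite:", tls.cipher())  # (name, protocol, secret bits)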

Abbreviated Handshake

In the full handshake, the server sends a "session ID" (i.e. a bunch of up to 32 bytes) to the client. Later on, the client can come back and send the same session ID as part of its ClientHello. This means that the client still remembers the cipher suite and keys from the previous handshake and would like to reuse these parameters. If the server also remembers the cipher suite and keys, then it copies that specific session ID in its ServerHello, and then follows the abbreviated handshake:

 Client                                   Server

 ClientHello              -------->
                                          ServerHello
                                          [ChangeCipherSpec]
                          <--------       Finished
 [ChangeCipherSpec]
 Finished                 -------->
 Application Data         <------->       Application Data

The abbreviated handshake is shorter: fewer messages, no asymmetric cryptography business, and, most importantly, reduced latency. Web browsers and servers do that a lot. A typical Web browser will open a SSL connection with a full handshake, then do abbreviated handshakes for all other connections to the same server: the other connections it opens in parallel, and also the subsequent connections to the same server. Indeed, typical Web servers will close connections after 15 seconds of inactivity, but they will remember sessions (the cipher suite and keys) for a lot longer (possibly for hours or even days).
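Here is a hedged sketch of session reuse with Python's ssl module (example.com is a placeholder; whether the second handshake really is abbreviated depends on the server and on the protocol version):

import socket, ssl

host = "example.com"  # placeholder
ctx = ssl.create_default_context()

with socket.create_connection((host, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=host) as tls:
        saved = tls.session            # session state from the full handshake

with socket.create_connection((host, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=host, session=saved) as tls:
        print("session reused:", tls.session_reused)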

There are several key exchange algorithms which SSL can use. This is specified by the cipher suite; each key exchange algorithm works with some kinds of server public key. The most common key exchange algorithms are:

  • RSA: the server's key is of type RSA. The client generates a random value (the "pre-master secret" of 48 bytes, out of which 46 are random) and encrypts it with the server's public key. There is no ServerKeyExchange.
  • DHE_RSA: the server's key is of type RSA, but used only for signature. The actual key exchange uses Diffie-Hellman. The server sends a ServerKeyExchange message containing the DH parameters (modulus, generator) and a newly-generated DH public key; moreover, the server signs this message. The client will respond with a ClientKeyExchange message which also contains a newly-generated DH public key. The DH yields the "pre-master secret".
  • DHE_DSS: like DHE_RSA, but the server has a DSS key ("DSS" is also known as "DSA"). DSS is a signature-only algorithm.
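As a toy numeric illustration of the ephemeral Diffie-Hellman exchange used by the DHE suites above (the modulus here is laughably small and the randomness is not cryptographic; only the shape of the exchange is the point):

import random

p, g = 0xFFFFFFFB, 5                       # toy prime modulus and generator
server_priv = random.randrange(2, p - 2)
client_priv = random.randrange(2, p - 2)

server_pub = pow(g, server_priv, p)        # sent (and signed) in ServerKeyExchange
client_pub = pow(g, client_priv, p)        # sent in ClientKeyExchange

# Both sides compute the same value without it ever crossing the wire.
assert pow(client_pub, server_priv, p) == pow(server_pub, client_priv, p)
pre_master_secret = pow(server_pub, client_priv, p)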

Less commonly used key exchange algorithms include:

  • DH: the server's key is of type Diffie-Hellman (we are talking of a certificate which contains a DH key). This used to be "popular" in an administrative way (US federal government mandated its use) when the RSA patent was still active (this was during the previous century). Despite the bureaucratic push, it was never as widely deployed as RSA.
  • DH_anon: like the DHE suites, but without the signature from the server. This is a certificate-less cipher suite. By construction, it is vulnerable to Man-in-the-Middle attacks, thus very rarely enabled at all.
  • PSK: pre-shared key cipher suites. A symmetric-only key exchange, building on a pre-established shared secret.
  • SRP: application of the SRP protocol, which is a Password Authenticated Key Exchange protocol. Client and server authenticate each other with regard to a shared secret, which can be a low-entropy password (whereas PSK requires a high-entropy shared secret). Very nifty. Not widely supported yet.
  • An ephemeral RSA key: like DHE but with a newly-generated RSA key pair. Since generating RSA keys is expensive, this is not a popular option, and was specified only as part of "export" cipher suites which complied to the pre-2000 US export regulations on cryptography (i.e. RSA keys of at most 512 bits). Nobody does that nowadays.
  • Variants of the DH* algorithms with elliptic curves. Very fashionable. Should become common in the future.

Digital certificates are vessels for asymmetric keys. They are intended to solve key distribution. Namely, the client wants to use the server's public key. The attacker will try to make the client use the attacker's public key. So the client must have a way to make sure that it is using the right key.

SSL is supposed to use X.509. This is a standard for certificates. Each certificate is signed by a Certification Authority. The idea is that the client inherently knows the public keys of a handful of CA (these are the "trust anchors" or "root certificates"). With these keys, the client can verify the signature computed by a CA over a certificate which has been issued to the server. This process can be extended recursively: a CA can issue a certificate for another CA (i.e. sign the certificate structure which contains the other CA name and key). A chain of certificates beginning with a root CA and ending with the server's certificate, with intermediate CA certificates in between, each certificate being signed relatively to the public key which is encoded in the previous certificate, is called, unimaginatively, a certificate chain.

So the client is supposed to do the following:

  • Get a certificate chain ending with the server's certificate. The Certificate message from the server is supposed to contain, precisely, such a chain.
  • Validate the chain, i.e. verify all the signatures and names and the various X.509 bits. Also, the client should check the revocation status of all the certificates in the chain, which is complex and heavy (Web browsers now do it, more or less, but it is a recent development).
  • Verify that the intended server name is indeed written in the server's certificate. Because the client does not only want to use a validated public key, it also wants to use the public key of a specific server. See RFC 2818 for details on how this is done in a HTTPS context.
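These three steps are what a well-configured TLS client library performs for you. As a hedged sketch with Python's ssl module (example.com is a placeholder; note that revocation checking, the heavy part mentioned above, is not covered here):

import socket, ssl

host = "example.com"  # placeholder
ctx = ssl.create_default_context()    # chain validation + RFC 2818-style hostname check
with socket.create_connection((host, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=host) as tls:
        cert = tls.getpeercert()      # leaf certificate, already validated
        print("subject:", cert["subject"])
        print("issuer:", cert["issuer"])
        print("subjectAltName:", cert.get("subjectAltName"))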

The certification model with X.509 certificates has often been criticized, not really on technical grounds, but rather for politico-economic reasons. It concentrates validation power into the hands of a few players, who are not necessarily well-intentioned, or at least not always competent. Now and again, proposals for other systems are published (e.g. Convergence or DNSSEC) but none has gained wide acceptance (yet).

For certificate-based client authentication, it is entirely up to the server to decide what to do with a client certificate (and also what to do with a client who declined to send a certificate). In the Windows/IIS/Active Directory world, a client certificate should contain an account name as a "User Principal Name" (encoded in a Subject Alt Name extension of the certificate); the server looks it up in its Active Directory server.

Handshake Again

Since a handshake is just some messages which are sent as records with the current encryption/compression conventions, nothing theoretically prevents a SSL client and server from doing a second handshake within an established SSL connection. And, indeed, it is supported and it happens in practice.

At any time, the client or the server can initiate a new handshake (the server can send a HelloRequest message to trigger it; the client just sends a ClientHello). A typical situation is the following:

  • An HTTPS server is configured to listen to SSL requests.
  • A client connects and a handshake is performed.
  • Once the handshake is done, the client sends its "applicative data", which consists of a HTTP request. At that point (and at that point only), the server learns the target path. Up to that point, the URL which the client wishes to reach was unknown to the server (the server might have been made aware of the target server name through a Server Name Indication SSL extension, but this does not include the path).
  • Upon seeing the path, the server may learn that this is for a part of its data which is supposed to be accessed only by clients authenticated with certificates. But the server did not ask for a client certificate in the handshake (in particular because not-so-old Web browsers displayed freakish popups when asked for a certificate, in particular if they did not have one, so a server would refrain from asking a certificate if it did not have good reason to believe that the client has one and knows how to use it).
  • Therefore, the server triggers a new handshake, this time requesting a certificate.

There is an interesting weakness in the situation I just described; see RFC 5746 for a workaround. In a conceptual way, SSL transfers security characteristics only in the "forward" way. When doing a new handshake, whatever could be known about the client before the new handshake is still valid after (e.g. if the client had sent a good username+password within the tunnel), but not the other way round. In the situation above, the first HTTP request, which was received before the new handshake, is not covered by the certificate-based authentication of the second handshake, and it could have been chosen by the attacker! Unfortunately, some Web servers just assumed that the client authentication from the second handshake extended to what was sent before that second handshake, and this allowed some nasty tricks from the attacker. RFC 5746 attempts to fix that.

Alert messages are just warning and error messages. They are rather uninteresting except when they can be subverted as part of some attacks (see later on).

There is an important alert message, called close_notify: it is a message which the client or the server sends when it wishes to close the connection. Upon receiving this message, the server or client must also respond with a close_notify and then consider the tunnel to be closed (but the session is still valid, and can be reused in a later abbreviated handshake). The interesting part is that these alert messages are, like all other records, protected by the encryption and MAC. Thus, the connection closure is covered by the cryptographic umbrella.

This is important in the context of (old) HTTP, where some data can be sent by the server without an explicit "content-length": the data extends until the end of the transport stream. Old HTTP with SSLv2 (which did not have the close_notify) allowed an attacker to force a connection close (at the TCP level) which the client would have taken for a normal close; thus, the attacker could truncate the data without being caught. This is one of the problems with SSLv2 (arguably, the worst) and SSLv3 fixes it. Note that "modern" HTTP uses "Content-Length" headers and/or chunked encoding, which is not vulnerable to such truncation, even if the SSL layer allowed it. Still, it is nice to know that SSL offers protection on closure events.
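As a minimal sketch of an orderly closure with Python's ssl module (example.com is a placeholder): unwrap() performs the closure exchange, i.e. sends close_notify, before handing back the bare TCP socket; a server that simply drops the connection may make it fail, which is precisely the truncation signal you want to see.

import socket, ssl

host = "example.com"  # placeholder
ctx = ssl.create_default_context()
raw = socket.create_connection((host, 443))
tls = ctx.wrap_socket(raw, server_hostname=host)
tls.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
print(tls.recv(256))
plain = tls.unwrap()   # TLS closure (close_notify), returns the underlying socket
plain.close()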

There is a limit on Stack Exchange answer length, so the description of some attacks on SSL will be in another answer (besides, I have some pancakes to cook). Stay tuned.

Living a High-DPI desktop lifestyle can be painful - Scott Hanselman

$
0
0

Comments:"Living a High-DPI desktop lifestyle can be painful - Scott Hanselman"

URL:http://www.hanselman.com/blog/LivingAHighDPIDesktopLifestyleCanBePainful.aspx


I've been using this Lenovo Yoga 2 Pro for the last few weeks, and lemme tell you, it's lovely. It's the perfect size, it weighs nothing, touch screen, fast SSD, it's thinner than the X1 Carbon Touch that is my primary machine, and it just feels right.

It also has about the nicest screen I've ever seen on a Windows Laptop.

Except. This thing runs at 3200x1800. That's FOUR of my 1600x900 ThinkPad X1 Carbon Touch screens.

To be clear, full screen apps (Windows Store apps) almost universally look great. The text is clear, there's nary a pixel in sight. The whole full-screen Windows Store ecosystem seems to work nicely with high-DPI displays. And that makes sense, as it appears they've put a LOT of thought into high-dpi with Windows 8.1. I've changed a few settings on my 1080p Surface 2 in order to take better advantage of High-DPI and run more apps simultaneously, in fact.

Also, note the checkbox that lets you set different scaling levels for different displays, so you can keep your laptop at one scaling level and an external monitor at another, for example.

It's the Desktop where I get into trouble. First, let's look at the display at "small fonts."

NOTE: This is NOT the Default setting. The default is smart about the size of your screen and DPI and always tries to get the fonts looking the right size. I've changed the default to 100% to illustrate the massive number of pixels here.

3200x1800 is SO high res that when you're running it at Small Fonts, well, a picture is worth a million pixels, right? Go ahead, click it, I'll wait. And you will wait, it's 3 megs.

Many, if not most apps work fine in the High-DPI desktop world. It's a little hard to get the point across in a blog post of screenshots because you, Dear Reader, are going to be reading this on a variety of displays. But I'll try.

Problems happen when applications either totally don't think about High-DPI displays, or more commonly, they kind of think about them.

You can say all this talk of High-DPI is a problem with Windows, but I think it's a problem with the app developers. The documentation is clear on High-DPI and developers need to test, include appropriate resources or don't claim to support high-dpi. I have a number of older Windows apps that look great on this display. They are simply scaled at 2x. Sure, they may be a little blurry (they have been scaled 2x) but they are all 100% useable.
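For what it's worth, the opt-in isn't much code. Here's a hedged, Windows-only sketch in Python (ctypes against standard Win32 calls) of a process declaring itself DPI-aware and reading the effective DPI; a process that skips the first call just gets bitmap-scaled by the system, which is the blurry-but-usable case I mentioned:

import ctypes

ctypes.windll.user32.SetProcessDPIAware()          # "I handle DPI scaling myself"
dc = ctypes.windll.user32.GetDC(0)
LOGPIXELSX = 88                                    # GetDeviceCaps index for horizontal DPI
dpi_x = ctypes.windll.gdi32.GetDeviceCaps(dc, LOGPIXELSX)
ctypes.windll.user32.ReleaseDC(0, dc)
print("effective horizontal DPI:", dpi_x)          # 96 = 100%, 192 = 200% scaling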

NOTE: There's a very technical session on getting high-dpi to look good in Windows Desktop apps at BUILD. The Video is here.

Here's a few examples that have caused me pain in just the last week, as well as some Good Citizen apps that look great at High-DPI.

Examples of Poor High-DPI behavior

Let's start with Windows Live Writer, one of my favorite apps and the app I'm using to write this post. It almost looks great, presumably because it's a WPF application and WPF is pretty good about DPI things. However, note the pictures. They are exactly half the size of reality. Well, let me be more clear. They are exactly pixel-sized. They are the size they are, rather than scaled to 200%. This has caused me to upload either giant pics or too-small pics because WLW scales text at one size and images at another, within the same document!

Adobe everything. I am shocked at how bad Adobe stuff looks on a high-dpi display. Just flip a coin, chances are it's gonna be a mix of small and large. Here's Adobe Reader.

Here's the Flash installer.

Here's a great example - Dropbox.

Dropbox gets worse the deeper you get into the menus.

SQL Server Management Studio is a bad example.

Here's an easy fix, just add high-res arrow resources, or draw them with a vector.

Examples of Good High-DPI behavior

Visual Studio 2013 looks great. Fonts are well-sized, and while the icons aren't high-res (retina) they still look ok. All the Dialog boxes and menus work as they should.

Word 2013 and all of Office look great at High-DPI. They've got great icons, great fonts and generally are awesome.

Paint.NET 4.0 Alpha also looks great. There's some scaled icons, but the app is smart and there's pixel perfect editing.

GitHub for Windows looks awesome at High-DPI.

Do you have any examples of high-DPI frustration on the Desktop? Upload them to ImgUr.com and link to them in the comments!

Bitcoin, Litecoin, Namecoin, Terracoin, Peercoin, Novacoin, Freicoin, Feathercoin stats
