Wednesday, September 25, 2013

A Few Thoughts on Cryptographic Engineering: On the NSA

A Few Thoughts on Cryptographic Engineering: On the NSA: Let me tell you the story of my tiny brush with the biggest crypto story of the year . A few weeks ago I received a call from a reporter a...

Friday, September 13, 2013

[Linux] Learning Linux for embedded systems

Ref: http://www.embedded.com/electronics-blogs/open-mike/4420567/Learning-Linux-for-embedded-systems

Learning Linux for embedded systems

I was recently asked how a person with experience in embedded systems programming with 8-bit processors, such as PIC, as well as 32-bit processors, such as PowerPC, but no Linux experience, can learn how to use Embedded Linux. 

What I always recommend to such an embedded systems programmer is this: Look at Embedded Linux as two parts, the embedded part and the Linux part. Let's consider the Linux part first.

The Linux side
Operating systems abound and the choices for an embedded system are many, both proprietary and open source. Linux is one of these choices. No matter what you use for your development host -- Linux, Windows, or Mac -- you need to learn how to program using the target OS. In this respect, using Embedded Linux is not greatly different from using VxWorks, Windows CE, or another OS. You need an understanding of how the OS is designed, how to configure it, and how to program using its application programming interface (API).
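To give a concrete taste of that API: the POSIX system calls Linux exposes are the same whether you are on a desktop distribution or an embedded target. Here is a minimal sketch in Python, whose os module wraps the underlying open/write/read/close calls (the file name is just an illustration):

```python
import os
import tempfile

# open(2)/write(2)/read(2)/close(2) via Python's os module, which wraps
# the same POSIX system calls available on desktop and Embedded Linux.
path = os.path.join(tempfile.gettempdir(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
os.write(fd, b"hello, embedded linux\n")
os.close(fd)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 64)
os.close(fd)

os.unlink(path)
print(data.decode(), end="")
```

The same few calls, written in C against unistd.h, would behave identically on a desktop PC or an embedded board, which is exactly the point of the next section.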

A few factors make learning how to program Linux easier than learning other embedded OSes. You'll find many books and tutorials about Linux, as well as Unix, from which it is derived -- many more than for other OSes. Online resources for Linux are ample, while other OSes have a much smaller presence, or one driven by the OS manufacturer. Linux is open source, and you can read the code to get an understanding of exactly what the OS is doing, something that is often impossible with a proprietary OS distributed as binaries. (I certainly do not recommend reading Linux source to try to learn how to program Linux. That's like trying to learn to drive by studying how a car's transmission works.)

The most significant factor that sets Linux apart from other OSes is that the same kernel is used for all systems, from the smallest embedded boards, to desktop systems, to large server farms. This means you can learn a great deal of Linux programming on your desktop, in an environment that is much more flexible than a target board with all of the complexities of connecting to the target, downloading a test program, and running the test. All of the basic concepts and most APIs are the same for your desktop Linux and your Embedded Linux.

Installing Linux
You could install a desktop Linux distribution on your development system, replacing your Windows or Mac system, but that may be a pretty large piece to bite off at one time, since you would likely need to configure email, learn new tools, and come up to speed with a different desktop interface. You could install Linux in a dual-boot environment, where you use the old environment for email, etc., and use the Linux system for learning. This can be pretty awkward, since you need to shut down one environment to bring up the other. Additionally, doing either within a corporate environment may be impractical or impossible. IT folks prefer supporting a known environment, not one that you have chosen.

An easier way is to create a virtual machine environment on your current development system. For Windows hosts, you can install VMware Player or VirtualBox, and on the Mac, you can install Parallels or VMware Fusion. Using a VM offers you much more flexibility. You can install a desktop Linux distribution, like Ubuntu or Fedora. You can use this distribution to become familiar with basic Linux concepts, learn the command shell and learn how to build and run programs. You can reconfigure the kernel or load drivers, without the concern that you'll crash your desktop system. You can build the entire kernel and application environment, similar to what you might do with a cross-development environment for an Embedded Linux target.

If your VM running Linux crashes, you simply restart the VM. The crash doesn't affect other things you might be doing on your development system, such as reading a web page on how to build and install a driver, or writing an email to one of the many support mailing lists.

Some VM products have snapshot features that let you checkpoint a known working configuration and roll back to it if you can't correct a crash easily. Restoring a snapshot is far easier than trying to rescue a crashing desktop system or an unresponsive target board.

A Linux VM running on your desktop is not a perfect model for an Embedded Linux environment. The VM emulates the hardware of a desktop system, with a limited set of devices that are unlikely to match a real embedded target. But our objective at this point is not modeling a real target (something we'll discuss later) but creating an environment where you can learn Linux concepts and programming easily.

This is the first step: Create a VM and install a desktop Linux distribution on the VM. We'll pick up from here in our next installment.

Michael Eager is principal consultant at Eager Consulting in Palo Alto, Calif. He has over four decades of experience developing compilers, debuggers, and simulators for a wide range of processor architectures used in embedded systems. His current and former clients include major semiconductor companies and systems developers. Michael has been a member of the ISO C++ Standard Committee and ABI Committees for several processor architectures. He is chair of the Debugging Standards Committee for DWARF, a widely used debug data format. He is active in the open-source and Linux communities.

Tuesday, September 10, 2013

Trusted Execution Technology (aka TXT): What is it?

Ref: http://communities.intel.com/community/vproexpert/blog/2011/01/25/trusted-execution-technology-aka-txt-what-is-it

In 2007, Intel introduced a new security feature on the business desktop line called Trusted Execution Technology (TXT). TXT was added to Intel vPro notebooks in 2008 and to the server platform in 2010. TXT is the foundation for a new generation of more trustworthy computing platforms.

Many of today's most sophisticated attacks against PCs aim to infect the user's machine for various ends -- sending spam, mounting DDoS attacks, stealing information -- and mitigating them at the software layer alone has proven to be a big challenge.

As a hardware manufacturer, it's our responsibility to join this battle and help the software industry develop more robust security solutions. Intel's initiative isn't the first one, however. Who remembers the ring hierarchy introduced on the 286, which allowed building an operating system with privilege isolation? Or the Execute Disable Bit, which helped prevent malware propagation by marking which memory areas are appropriate for code execution? Security can't be treated at just one layer; it must be addressed in depth (e.g. software, hardware, and process). We must always stay ahead of security issues, because it's a race between those who need protection and those who want to attack.

Trusted Execution Technology (TXT) comes as a reinforcement against threats that act at the same privilege level as the operating system kernel, or even higher -- such as hypervisor malware, where malicious code takes advantage of the CPU virtualization instructions to emulate hardware and completely control the operating system.

How does it work?

Before we explain TXT, there is some groundwork to be done. First let's understand how a key component in this technology works: the Trusted Platform Module (TPM), the root component of a secure platform. It's a passive I/O device, traditionally located on the LPC bus and nowadays sometimes integrated into the chipset. The TPM has special registers, called Platform Configuration Registers (PCR[0…23]), and can do some interesting things: seal/unseal secrets, support quoting (remote attestation), and provide crypto services such as RSA and a PRNG.

The core TPM primitive is the PCR extend operation, which folds each new measurement into the previous PCR value to produce the next one:

PCR_new = SHA-1(PCR_old || measurement)

A single PCR can be extended multiple times, and it is computationally infeasible to force a PCR to a chosen value, so the order in which measurements happen matters [(ext(A),ext(B)) ≠ (ext(B),ext(A))]. A secret sealed in the TPM can only be unsealed if the PCR values match, as presented in figure 1.
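The extend operation and its order-dependence can be sketched in a few lines of Python. This is a simulation only, not a TPM API: the extend helper below is our own, mimicking TPM 1.2's 20-byte SHA-1 PCR chains.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM 1.2-style extend: PCR_new = SHA-1(PCR_old || SHA-1(measurement))
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

pcr0 = b"\x00" * 20          # PCRs start at all zeros after reset
ab = extend(extend(pcr0, b"A"), b"B")
ba = extend(extend(pcr0, b"B"), b"A")

print(ab != ba)  # True: ext(A),ext(B) differs from ext(B),ext(A)
```

Because each value is chained through a one-way hash, there is no way to "rewind" a PCR or to jump it to an arbitrary value; the only way to reach a given PCR state is to replay exactly the same measurements in exactly the same order.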

Figure 1 – Sealing/unsealing TPM operation, gated on matching PCR registers.

The TPM is also used by Microsoft BitLocker, a full-disk encryption technology: the key that decrypts the disk is sealed in the TPM chip, and retrieving this key depends on the integrity of all code executed since bootstrap. This process is known as the Static Root of Trust for Measurement (SRTM), as presented in figure 2.

Figure 2 – Static Root of Trust for Measurement

SRTM produces excellent results and a great level of security -- mainly against offline attacks -- but the problem is that once the TPM is initialized, multiple components must be verified in the chain of trust. Verifying the integrity of each component in the computer's initialization path, as presented in figure 2, becomes hard to manage as the number of components grows: every possible piece of code that might have executed since boot must be measured, which imposes scalability issues.
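The scalability problem follows directly from the extend chain: every measured component feeds the final PCR value, so changing any one of them (a BIOS update, a new bootloader build) changes the value the secret was sealed against. A hypothetical sketch, simulating the TPM 1.2 SHA-1 extend with made-up component names:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM 1.2-style extend, simulated with SHA-1
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

def measure_boot(components):
    # Fold each boot component, in order, into a single PCR value.
    pcr = b"\x00" * 20
    for c in components:
        pcr = extend(pcr, c)
    return pcr

good = measure_boot([b"bios-v1", b"bootloader-v1", b"kernel-v1"])
patched = measure_boot([b"bios-v2", b"bootloader-v1", b"kernel-v1"])

# A single updated component changes the final PCR, so a secret
# sealed against `good` can no longer be unsealed.
print(good != patched)
```

This is why every legitimate update to any measured component forces the sealed secrets to be re-provisioned, and why SRTM management costs grow with the number of components in the boot path.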

Therefore, to cope with this limitation, TXT uses a different approach, named DRTM (Dynamic Root of Trust for Measurement). Instead of validating every single piece of code, a new instruction, SENTER, can attest the integrity of the hypervisor loader or OS kernel code in a process known as Measured Launch. As presented in figure 3, the hypervisor loader issues the GETSEC[SENTER] instruction, which essentially performs a soft processor reset and loads a signed Authenticated Code Module (ACM), which will only execute if it carries a valid digital signature. This module verifies system configuration and BIOS elements by comparing them against “known good” values, and protects sensitive memory areas using Intel Virtualization Technology for Directed I/O (Intel VT-d) and chipset-specific technologies such as Intel Extended Page Tables (Intel EPT). It then verifies and launches the host system (a hypervisor core or an OS kernel), which configures low-level systems and protects itself using hardware-assisted paging (HAP).

Figure 3 – Dynamic Root of Trust for Measurement


TXT is the right technology for Measured Launch and, in conjunction with Intel Virtualization Technology (VT-x, VT-d, and EPT), it also makes it possible to implement run-time protection against malicious code.