Saturday, November 2, 2013

Google Taking Aim at Device Modders in Android 4.4 KitKat

Ref: http://www.xda-developers.com/android/google-taking-aim-at-device-modders-in-android-4-4-kitkat/

POSTED NOVEMBER 1, 2013 AT 6:30 PM BY PULSER_G2
Android 4.4 introduces a number of changes intended to reduce the risks of rootkits on the platform. In addition to SELinux, the dm-verity kernel feature is also used on boot. The dm-verity feature is used to verify the filesystem storage, and detect modifications to the device at block level (rather than file level). In essence, dm-verity aims to prevent root software from modifying the device file system. This is done by detecting the modifications made to the filesystem, which will no longer match the expected configuration.
In dm-verity, each block of the storage device has a SHA-256 hash associated with it. (For reference, a block is simply a unit of address for storage, typically around 4 KB on flash devices.) A tree of hashes is formed across pages, such that only the “top” hash in the tree (known as the root hash) needs to be trusted, in order for the entire filesystem to be trusted. If any block is modified, this will change the hash, breaking the chain.
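A rough sketch of the hash-tree idea, in Python (illustrative only -- this is not the actual dm-verity on-disk format, and the block size and helper names here are assumptions):

    import hashlib

    BLOCK_SIZE = 4096  # typical flash block size, as noted above

    def block_hashes(image: bytes) -> list:
        # SHA-256 of every block of the filesystem image (the leaves of the tree).
        return [hashlib.sha256(image[i:i + BLOCK_SIZE]).digest()
                for i in range(0, len(image), BLOCK_SIZE)]

    def root_hash(hashes: list) -> bytes:
        # Hash pairs of nodes layer by layer until a single trusted root remains.
        while len(hashes) > 1:
            if len(hashes) % 2:
                hashes.append(hashes[-1])          # pad odd layers
            hashes = [hashlib.sha256(hashes[i] + hashes[i + 1]).digest()
                      for i in range(0, len(hashes), 2)]
        return hashes[0]

    image = bytes(4 * BLOCK_SIZE)                  # a pretend 4-block system image
    trusted_root = root_hash(block_hashes(image))

    # Flipping a single byte in any block changes that block's hash, which
    # propagates up the tree, so the recomputed root no longer matches.
    tampered = bytearray(image)
    tampered[5000] ^= 0xFF
    assert root_hash(block_hashes(bytes(tampered))) != trusted_root

Only the root hash then needs to be protected, which in the real scheme is handled by the signature described next.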
The boot partition of the device will contain a public key, which the OEM is expected to externally verify (perhaps via the bootloader or low-level CPU features). This public key is used to ensure the signature of the hash on the file system is valid and unmodified.
In order to reduce the time taken to verify the filesystem, blocks are only verified when they are accessed, and the verification runs in parallel with the regular read operation (essentially eliminating any latency added to storage access). If verification fails (i.e. files on the system partition have been changed), a read error is generated. Depending on the application accessing the data, it may proceed if the action is not critical, but it is also possible for applications to decline to operate under these conditions.
While nobody can predict the future with 100% accuracy, I think it’s fair to say that “rooting” and modifying devices running Android 4.4 with locked bootloaders (i.e. where root exploits are required, as the OEM will not permit custom kernels) may well be considerably more difficult than in previous Android versions. It seems that Android 4.4 is taking a few leaves out of the Chrome OS book, as these changes essentially implement “verified boot,” as found on Chrome OS.
To reiterate, if you are able to change the kernel your device uses, this feature will not be a concern. It's possible either to disable dm-verity in the kernel, or to set it up to use your own keys to authenticate the system hash. For users who choose to buy carrier-branded devices and accept a locked bootloader, but find a way to root the device, take heed of this warning: in my technical opinion, it is quite likely that this kind of workaround will become incredibly difficult on future devices. If you want the ability to modify the software on your phone, I'd avoid anything with a locked bootloader, and ensure you can modify the kernel (to disable dm-verity or use your own signatures).
Right now, little is known about what this will actually mean, but aside from greater security for users on stock ROMs, I suspect there will be some noticeable impact on casual users wishing to make small changes to Android. Until we see devices from other OEMs shipping with 4.4, it’s difficult to really assess how (or if) this will change things. But take note, and bear it in mind.

Sunday, October 20, 2013

What is the Casimir effect?

Ref: http://www.scientificamerican.com/article.cfm?id=what-is-the-casimir-effec



Northeastern University experimental particle physicists Stephen Reucroft and John Swain put their heads together to write the following answer.
To understand the Casimir Effect, one first has to understand something about a vacuum in space as it is viewed in quantum field theory. Far from being empty, modern physics assumes that a vacuum is full of fluctuating electromagnetic waves that can never be completely eliminated, like an ocean with waves that are always present and can never be stopped. These waves come in all possible wavelengths, and their presence implies that empty space contains a certain amount of energy--an energy that we can't tap, but that is always there.
Now, if mirrors are placed facing each other in a vacuum, some of the waves will fit between them, bouncing back and forth, while others will not. As the two mirrors move closer to each other, the longer waves will no longer fit--the result being that the total amount of energy in the vacuum between the plates will be a bit less than the amount elsewhere in the vacuum. Thus, the mirrors will attract each other, just as two objects held together by a stretched spring will move together as the energy stored in the spring decreases.
[Illustration of the Casimir effect. Image: Scientific American]
This effect, that two mirrors in a vacuum will be attracted to each other, is the Casimir Effect. It was first predicted in 1948 by Dutch physicist Hendrik Casimir. Steve K. Lamoreaux, now at Los Alamos National Laboratory, initially measured the tiny force in 1996.
It is generally true that the amount of energy in a piece of vacuum can be altered by material around it, and the term "Casimir Effect" is also used in this broader context. If the mirrors move rapidly, some of the vacuum waves can become real waves. Julian Schwinger and many others have suggested that this "dynamical Casimir effect" may be responsible for the mysterious phenomenon known as sonoluminescence.
One of the most interesting aspects of vacuum energy (with or without mirrors) is that, calculated in quantum field theory, it is infinite! To some, this finding implies that the vacuum of space could be an enormous source of energy--called "zero point energy."
But the finding also raises a physical problem: there's nothing to stop arbitrarily small waves from fitting between two mirrors, and there is an infinite number of these wavelengths. The mathematical solution is to temporarily do the calculation for a finite number of waves for two different separations of the mirrors, find the associated difference in vacuum energies and then argue that the difference remains finite as one allows the number of wavelengths to go to infinity.
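For reference, the standard result of that regularized calculation, for two ideal parallel mirrors of area A separated by a distance d, is (in LaTeX notation):

    \frac{E(d)}{A} = -\frac{\pi^{2}\hbar c}{720\,d^{3}},
    \qquad
    \frac{F(d)}{A} = -\frac{\partial}{\partial d}\frac{E(d)}{A} = -\frac{\pi^{2}\hbar c}{240\,d^{4}}

The negative sign means the force is attractive, and the steep 1/d^4 dependence is why it only becomes measurable when the mirrors are closer than roughly a micrometre.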
Although this trick works, and gives answers in agreement with experiment, the problem of an infinite vacuum energy is a serious one. Einstein's theory of gravitation implies that this energy must produce an infinite gravitational curvature of spacetime--something we most definitely do not observe. The resolution of this problem is still an open research question.

Basics of Intel Virtualization (VT-x)

Basics for those starting out with Intel Virtualization (VT-x).
Due to the site's explicit prohibition on reproduction, I can only share the original link rather than repost the article on this blog.
So, it is just one click away :)

Ref: http://www.hardwaresecrets.com/article/Everything-You-Need-to-Know-About-the-Intel-Virtualization-Technology/263/1



Multiple Scientists Confirm The Reality of Free Energy – Here’s The Proof

Ref: http://www.collective-evolution.com/2013/10/11/multiple-scientists-confirm-the-reality-of-free-energy-heres-the-proof/#_



Who is benefiting from suppressing scientific research? Whose power and wealth is threatened by access to clean and free energy? Who has the desire to create a system where so few have so much, and so many have so little?
It’s become extremely obvious, especially within the past few years, that Earth’s dependence on fossil fuels is not needed at all. Yet we continue to create war, destroy the environment and harm mother Earth so we can continue using the same old techniques that generate trillions of dollars for those at the top of the energy industry. Corporate media continues to push the idea that we are in an energy crisis, that we are approaching a severe problem due to a lack of resources.  It’s funny how the same group of shareholders that own the energy industry also own corporate media. This seems to be both another fear tactic and another excuse to create conflict. How can there be a lack of resources when we have systems that can provide energy without any external input? This means that these systems could run for infinity and provide energy to the entire planet without burning fossil fuels. This would eliminate a large portion of the ‘bills’ you pay to live, and reduce the harmful effect we are having on Earth and her environment. Even if you don’t believe in the concept of free energy (also known as zero-point energy), we have multiple clean energy sources that render the entire energy industry obsolete. This article however will focus mainly on the concept of free energy which has been proven time and time again by researchers all across the world who have conducted several experiments and published their work multiple times. A portion of this vast amount of research will be presented in this paper.
These concepts have been proven in hundreds of laboratories all over the world, yet never see the light of day. If the new energy technologies were set free world wide the change would be profound. It would affect everybody, it would be applicable everywhere. These technologies are absolutely the most important thing that have happened in the history of the world.   – Dr. Brian O’Leary, Former NASA Astronaut and Princeton Physics Professor.

The Research

These concepts are currently being discussed at The Breakthrough Energy Movement Conference.
The Casimir Effect is a proven example of free energy that cannot be debunked. The Casimir Effect illustrates zero point or vacuum state energy, which predicts that two metal plates close together attract each other due to an imbalance in the quantum fluctuations(0)(8). You can see a visual demonstration of this concept here. The implications of this are far reaching and have been written about extensively within theoretical physics by researchers all over the world. Today, we are beginning to see that these concepts are not just theoretical, but instead very practical and simply very suppressed.
Vacuums are generally thought to be voids, but Hendrik Casimir believed these pockets of nothing do indeed contain fluctuations of electromagnetic waves. He suggested that two metal plates held apart in a vacuum could trap the waves, creating vacuum energy that could attract or repel the plates. As the boundaries of a region move, the variation in vacuum energy (zero-point energy) leads to the Casimir effect. Recent research done at Harvard University, Vrije University in Amsterdam, and elsewhere has confirmed the Casimir effect (7).
A paper published in the journal Foundations of Physics Letters (August 2001, Volume 14, Issue 4) shows that the principles of general relativity can be used to explain the principles of the motionless electromagnetic generator (MEG) (1). This device takes electromagnetic energy from curved space-time and outputs about twenty times more energy than is put in. The fact that these machines exist is astonishing; it's even more astonishing that they are not implemented worldwide right now. That would completely wipe out the entire energy industry, nobody would have to pay bills, and it would eradicate poverty at an exponential rate. The paper demonstrates that electromagnetic energy can be extracted from the vacuum and used to power working devices such as the MEG used in the experiment. It goes on to emphasize how these devices are reproducible and repeatable.
The results of this research have been used by numerous scientists all over the world. One of the many examples is a paper written by Theodor C. Loder, III, Professor Emeritus at the Institute for the Study of Earth, Oceans and Space at the University of New Hampshire. He outlined the importance of these concepts in his paper titled Space and Terrestrial Transportation and Energy Technologies For The 21st Century (2).
There is significant evidence that scientists since Tesla have known about this energy, but that its existence and potential use has been discouraged and indeed suppressed over the past half century or more (2) – Dr. Theodor C. Loder III
Harold E. Puthoff, an American physicist with a Ph.D. from Stanford University and a researcher at the Institute for Advanced Studies at Austin, Texas, published a paper in the journal Physical Review A (atomic, molecular, and optical physics) titled “Gravity as a zero-point-fluctuation force” (3). His paper proposed a suggestive model in which gravity is not a separately existing fundamental force, but is rather an induced effect associated with zero-point fluctuations of the vacuum, as illustrated by the Casimir force. This is the same professor who had close connections with Department of Defense-initiated research into remote viewing. The findings of that research are highly classified, and the program was shut down not long after its initiation (4).
Another astonishing paper titled “Extracting energy and heat from the vacuum,” by the same researchers, this time in conjunction with Daniel C. Cole, Ph.D. and Associate Professor at Boston University in the Department of Mechanical Engineering was published in the same journal (5).
Relatively recent proposals have been made in the literature for extracting energy and heat from electromagnetic zero-point radiation via the use of the Casimir force. The basic thermodynamics involved in these proposals is analyzed and clarified here, with the conclusion that yes, in principle, these proposals are correct (5).
Furthermore, in another paper in the journal Physical Review A, titled “Source of vacuum electromagnetic zero-point energy” (6), Puthoff describes how nature provides us with two alternatives for the origin of electromagnetic zero-point energy. One of them is generation by the quantum fluctuation motion of charged particles that constitute matter. His research shows that particle motion generates the zero-point energy spectrum, in the form of a self-regenerating cosmological feedback cycle.
Before commenting on the article, please read it, look at the sources, and watch the video; many of your questions can be answered there. We come across many who are quick to comment without examining the information presented. This is a clip from the documentary Thrive; you can view the full documentary by clicking on the title.
We’ve had major military people at great risks to themselves say yes these things are real. Why do you think the military industrial complex doesn’t want that statement to be made, because you start thinking about what kind of technology is behind that, that’s the bottom line.  – Adam Trombly, Physicist, Inventor
As illustrated multiple times above, the energy these systems use is extracted from the fabric of the space around us. That means it cannot be metered, which creates a threat to the largest industry on the planet, energy. An industry that is partly responsible for the destruction of our planet, and an industry that rakes in hundreds of trillions of dollars every year. No blame is to be given, only a realization is to be made that we have the power to change this anytime we choose. These technologies would completely change everything, but it’s important to remember that operating technology depends on what level of consciousness the operators are operating it at. Is the human race ready for such a transformation? Nothing can work unless the consciousness behind it comes from a place of love, peace, co-operation and understanding. The desire for the benefit of all beings on the planet would be the driving force for the release of these technologies.
These technologies are locked up in black budget projects, it would take an act of God to ever get them out to benefit humanity (2) – Ben Rich, Former Director of Lockheed’s Skunkworks Division
I hope I’ve provided enough information here for those interested in furthering their research on the subject. There is a lot to this technology, and it branches into many other areas from ancient history to sacred geometry and all the way to UFOs. The technology described in this paper is similar to what Dr. O’Leary states here with regards to propulsion systems and an isolated field of energy.  For more on this subject, please visit our exopolitics section under the alternative news tab as it does correlate with the technology of anti-gravity and free energy.
Collective Evolution has covered this topic before. We’ve demonstrated the reality of the Searl Effect Generator.
We’ve also written about the Free Energy Devices.
This article was simply to provide you with more information and research to show you just how applicable these concepts are and the tremendous implications they can have.

Wednesday, September 25, 2013

A Few Thoughts on Cryptographic Engineering: On the NSA

A Few Thoughts on Cryptographic Engineering: On the NSA: Let me tell you the story of my tiny brush with the biggest crypto story of the year . A few weeks ago I received a call from a reporter a...

Friday, September 13, 2013

[Linux] Learning Linux for embedded systems

Ref: http://www.embedded.com/electronics-blogs/open-mike/4420567/Learning-Linux-for-embedded-systems

Learning Linux for embedded systems

I was recently asked how a person with experience in embedded systems programming with 8-bit processors, such as PIC, as well as 32-bit processors, such as PowerPC, but no Linux experience, can learn how to use Embedded Linux. 

What I always recommend to such an embedded systems programmer is this: Look at Embedded Linux as two parts, the embedded part and the Linux part. Let's consider the Linux part first.

The Linux side
Operating systems abound and the choices are many for an embedded system, both proprietary and open source. Linux is one of these choices. No matter what you use for your development host, whether Linux, Windows, or Mac, you need to learn how to program using the target OS. In this respect, using Embedded Linux is not greatly different from using VxWorks, Windows CE, or another OS. You need an understanding of how the OS is designed, how to configure the OS, and how to program using its application programming interface (API).

A few factors make learning how to program Linux easier than other embedded OSes. You'll find many books and tutorials about Linux, as well as Unix from which it is derived -- many more than for other OSes. Online resources for Linux are ample, while other OSes have a much smaller presence, or one driven by the OS manufacturer. Linux is open source, and you can read the code to get an understanding of exactly what the OS is doing, something that is often impossible with a proprietary OS distributed as binaries. (I certainly do not recommend reading Linux source to try to learn how to program Linux. That's like trying to learn to drive by studying how a car's transmission works.)

The most significant factor that sets Linux apart from other OSes is that the same kernel is used for all systems, from the smallest embedded boards, to desktop systems, to large server farms. This means that you can learn a large amount of Linux programming on your desktop, in an environment which is much more flexible than using a target board with all of the complexities of connecting to the target, downloading a test program, and running the test. All of the basic concepts and most APIs are the same for your desktop Linux and your Embedded Linux.
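For example, a trivial query of the running kernel uses exactly the same interface everywhere (a minimal Python sketch; any language with POSIX bindings would show the same thing):

    import os

    # os.uname() wraps the uname(2) system call, which behaves identically
    # whether the kernel is running on a PC, in a VM, or on an embedded board.
    info = os.uname()
    print("Running Linux kernel", info.release, "on a", info.machine, "machine")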

Installing Linux
You could install a desktop Linux distribution on your development system, replacing your Windows or Mac system, but that may be a pretty large piece to bite off at one time, since you would likely need to configure email, learn new tools, and come up to speed with a different desktop interface. You could install Linux in a dual-boot environment, where you use the old environment for email, etc., and use the Linux system for learning. This can be pretty awkward, since you need to shut down one environment to bring up the other. Additionally, doing either within a corporate environment may be impractical or impossible. IT folks prefer supporting a known environment, not one that you have chosen.

An easier way is to create a virtual machine environment on your current development system. For Windows hosts, you can install VMware Player or VirtualBox, and on the Mac, you can install Parallels or VMware Fusion. Using a VM offers you much more flexibility. You can install a desktop Linux distribution, like Ubuntu or Fedora. You can use this distribution to become familiar with basic Linux concepts, learn the command shell and learn how to build and run programs. You can reconfigure the kernel or load drivers, without the concern that you'll crash your desktop system. You can build the entire kernel and application environment, similar to what you might do with a cross-development environment for an Embedded Linux target.

If your VM running Linux crashes, you simply restart the VM. The crash doesn't affect other things you might be doing on your development system, such as reading a web page on how to build and install a driver, or writing an email to one of the many support mailing lists.

Some of the VM products have snapshot features that allow you to take a checkpoint of a known working configuration, to which you can roll back if you can't correct a crash easily. This snapshot is far easier than trying to rescue a crashing desktop system or an unresponsive target board.

A Linux VM running on your desktop is not a perfect model for an Embedded Linux environment. The VM emulates the hardware of a desktop system, with a limited set of devices that are unlikely to match a real embedded target. But our objective at this point is not modeling a real target (something we'll discuss later) but creating an environment where you can learn Linux concepts and programming easily.

This is the first step: create a VM and install a desktop Linux distribution on it. We'll pick up from here in the next installment.

Michael Eager is principal consultant at Eager Consulting in Palo Alto, Calif. He has over four decades experience developing compilers, debuggers, and simulators for a wide range of processor architectures used in embedded systems. His current and former clients include major semiconductor companies and systems developers. Michael has been a member of the ISO C++ Standard Committee and ABI Committees for several processor architectures. He is chair of the Debugging Standards Committee for DWARF, a widely used debug data format. He is active in the open-source and Linux communities.

Tuesday, September 10, 2013

Trusted Execution Technology (aka TXT): What is it?

Ref: http://communities.intel.com/community/vproexpert/blog/2011/01/25/trusted-execution-technology-aka-txt-what-is-it

In 2007, Intel introduced a new security feature on the business desktop line called Trusted Execution Technology (TXT). TXT was added to Intel vPro notebooks in 2008 and to the server platform in 2010. TXT is the foundation of a new generation of safe computers.

Many of the most sophisticated attacks against PC equipment nowadays aim to infect users' machines for various ends -- sending spam, launching DDoS attacks, stealing information. Mitigating them at the software layer alone has been a big challenge.

As a hardware manufacturer, it's our responsibility to join this battle and help the software industry develop more robust security solutions. However, this isn't Intel's first such initiative. Who remembers the ring hierarchy introduced on the 286, which allowed creating an operating system with privilege isolation? Or the Execute Disable Bit, which helped prevent malware propagation by marking which memory areas were appropriate for code execution? Security can't be treated at just one layer; it has to be handled in depth (e.g. software, hardware, and process). We must always be ahead of security issues, because it's a race between those who need to be protected and those who want to attack.

Trusted Execution Technology (TXT) comes as a reinforcement to deal with threats that act at the same level as the operating system kernel, or at even more privileged levels -- like hypervisor malware, where the malicious code can take advantage of the CPU virtualization instructions to emulate hardware and completely control the operating system.

How does it work?

Before we explain TXT, there is some groundwork to be done. First, let's understand how a key component in this technology works: the Trusted Platform Module (TPM), the root component of a secure platform. It's a passive I/O device, usually located on the LPC bus, and nowadays it can also be found as part of the north bridge chipset. The TPM has special registers, called Platform Configuration Registers (PCR[0…23]), and can do some interesting things: seal/unseal secrets, allow quoting (remote attestation), and provide some cryptographic services, e.g. RSA and a PRNG.

The TPM is built around the PCR extend operation, which uses the previous PCR value to define the next one:

PCR_new = Hash(PCR_old || measured data)
A single PCR can be extended multiple times, and it is computationally infeasible to force a PCR to a chosen value, so the order in which things happen matters [(ext(A), ext(B)) ≠ (ext(B), ext(A))]. A secret sealed in the TPM can only be unsealed if the PCR values match the expected ones, as presented in figure 1.
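A minimal sketch of why the ordering matters, in Python (illustrative only -- a real TPM performs the extend inside the chip, and TPM 1.2 PCRs actually use SHA-1 rather than the SHA-256 used here):

    import hashlib

    def pcr_extend(pcr_value: bytes, measurement: bytes) -> bytes:
        # New PCR value = Hash(old PCR value || measurement)
        return hashlib.sha256(pcr_value + measurement).digest()

    pcr = bytes(32)                     # PCRs start at all zeros after reset
    a, b = b"bootloader", b"kernel"

    # Extending with A then B gives a different result than B then A,
    # so the PCR captures not just what ran, but the order in which it ran.
    pcr_ab = pcr_extend(pcr_extend(pcr, a), b)
    pcr_ba = pcr_extend(pcr_extend(pcr, b), a)
    assert pcr_ab != pcr_ba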

Figure 1 – TPM sealing/unsealing operation, conditional on matching PCR register values.
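A toy model of the seal/unseal gate, in Python (this is not a real TPM API -- the function names and the returned "blob" are purely illustrative, and in hardware the secret never leaves the chip):

    import hmac

    def seal(secret: bytes, pcr_value: bytes) -> dict:
        # Bind the secret to the platform state captured in the PCR.
        return {"pcr_at_seal": pcr_value, "secret": secret}

    def unseal(sealed: dict, current_pcr: bytes) -> bytes:
        # Release the secret only if the platform state is unchanged.
        if not hmac.compare_digest(sealed["pcr_at_seal"], current_pcr):
            raise PermissionError("PCR mismatch: platform state changed")
        return sealed["secret"]

    good_pcr = bytes(32)                          # value measured on a known-good boot
    blob = seal(b"disk encryption key", good_pcr)
    assert unseal(blob, good_pcr) == b"disk encryption key"   # same state: succeeds
    # unseal(blob, b"\x01" * 32) would raise PermissionError  # changed state: refuses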

The TPM is also used by Microsoft BitLocker, a full disk encryption technology, where the key to decrypt the disk is stored in the TPM chip and retrieving this key depends on the integrity of the code executed in memory since bootstrap. This process is known as the Static Root of Trust for Measurement (SRTM), as presented in figure 2.

Figure 2 – Static Root of Trust for Measurement (SRTM)

SRTM produces excellent results and a great level of security -- mainly against offline attacks -- but the problem is that multiple components must be verified in the chain of trust once the TPM is initialized. Verifying the integrity of each component in the path of computer initialization, as presented in figure 2, can become hard to manage due to the number of components involved. We need to measure every possible piece of code that might have been executed since system boot, and this imposes scalability issues.

Therefore, to cope with this limitation, TXT uses a different approach, named DRTM (Dynamic Root of Trust for Measurement). Instead of validating every single piece of code, there is a magic new instruction called SENTER that can attest the integrity of the hypervisor loader or OS kernel code, in a process known as Measured Launch. As presented in figure 3, the hypervisor loader issues the GETSEC[SENTER] instruction, which essentially performs a soft processor reset and loads a signed Authenticated Code Module (ACM), which can only be executed if it has a valid digital signature. This module verifies system configurations and BIOS elements by comparing them against "known good" values, with sensitive memory areas protected using Intel Virtualization Technology for Directed I/O (Intel VT-d) and chipset-specific technologies such as Intel Extended Page Tables (Intel EPT). It then verifies and launches the host system (a hypervisor core or an OS kernel), which configures low-level systems and protects itself using hardware-assisted paging (HAP).

Figure 3 – Dynamic Root of Trust for Measurement (DRTM)


TXT is the right technology for Measured Launch and, in conjunction with Intel Virtualization Technology (VT-x, VT-d, and EPT), it also makes it possible to implement run-time protection against malicious code.



Thursday, August 15, 2013

Execution address built-in functions for use in scatter files

Source: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0493c/CHDJDHFI.html


The execution address related functions can only be used when specifying a base_address, +offset value, or max_size. They map to combinations of the linker defined symbols shown in Table 4.
Table 4. Execution address related functions

Function                     Linker defined symbol value
ImageBase(region_name)       Image$$region_name$$Base
ImageLength(region_name)     Image$$region_name$$Length + Image$$region_name$$ZI$$Length
ImageLimit(region_name)      Image$$region_name$$Base + Image$$region_name$$Length + Image$$region_name$$ZI$$Length

The parameter region_name  can be either a load or an execution region name. Forward references are not permitted. The region_name can only refer to load or execution regions that have already been defined.

Note

You cannot use these functions when using the .ANY selector pattern. This is because a .ANY region uses the maximum size when assigning sections. The maximum size might not be available at that point, because the size of all regions is not known until after the .ANY assignment.
The following example shows how to use ImageLimit(region_name) to place one execution region immediately after another:
Example 10. Placing an execution region after another
LR1 0x8000
{
    ER1 0x100000
    {
        *(+RO)
    }
}
LR2 0x100000
{
    ER2 (ImageLimit(ER1))               ; Place ER2 after ER1 has finished
    {
        *(+RW +ZI)
    }
}

Using +offset with expressions

A +offset value for an execution region is defined in terms of the previous region. You can use it as an input to other expressions, such as AlignExpr. For example:
LR1 0x4000
{
    ER1 AlignExpr(+0, 0x8000)
    {
        ...
    }
}
By using AlignExpr, the result of +0 is aligned to a 0x8000 boundary. This creates an execution region with a load address of 0x4000 but an execution address of 0x8000.

Thursday, June 6, 2013

Intel working on low-power Thunderbolt for tablets, smartphones



Intel says Thunderbolt on mobile devices depends on the future of WiGig, a wireless data transfer specification

IDG News Service
June 04, 2013 10:34 AM ET
IDG News Service - A low-power Thunderbolt interconnect for smartphones and tablets is in the works, but the wired technology may not thrive if consumers prefer products using the wireless WiGig specification for data transfers.
There is a need for faster throughput so smartphones and tablets can connect to high-definition TVs and storage peripherals, said Dadi Perlmutter, executive vice president and general manager of the Intel Architecture Group, in an interview on the sidelines of the Computex trade show in Taipei.
The Thunderbolt data transfer technology shuttles data at high speeds between host computers and peripherals. Intel's mobile Thunderbolt interconnect will be a low-power version of its more power-hungry relative used in Macs and PCs, Perlmutter said. He did not provide a time frame on when the technology would be ready.
Apple was an early adopter of Thunderbolt, and if introduced, low-power Thunderbolt could be a candidate for use in iPhones and iPads. The mobile devices currently use the proprietary Lightning interconnect for charging and connecting to peripherals.
But in the end, adoption of Thunderbolt in smartphones and tablets depends on users, who may prefer wireless data transmission on mobile devices, Perlmutter said.
Intel is backing the WiGig specification, which can transfer data wirelessly at a rate of up to 7Gbps (bits per second), which is faster than standard Wi-Fi. WiGig operates over the 60GHz spectrum, and is intended for use over short distances or within a room. The Wireless Gigabit Alliance leads the development of the WiGig specification, and devices supporting the standard could go on sale next year. The Wi-Fi Alliance is eventually expected to take over WiGig development.
"Do users want Thunderbolt or do they want WiGig? They might want both. We are working on both," Perlmutter said.
"We'll see what's winning," Perlmutter said.
Most non-Apple smartphones and tablets today use the micro-USB ports to connect to peripherals, with USB 3.0 just reaching devices. Asustek introduced the Transformer Pad Infinity with a USB 3.0 port at Computex.
Also, adoption of Thunderbolt on desktops and laptops has been poor due to the dominance of USB 3.0, which is slower but ubiquitous. Thunderbolt peripherals and cables are also expensive relative to USB 3.0.
Intel in April doubled the speed of the Thunderbolt interconnect -- which supports the PCI-Express and DisplayPort protocols -- to 20Gbps. Standards-setting organizations PCI-Special Interest Group (PCI-SIG) and MIPI Alliance in February started work on a new M-PCIe (mobile PCI-Express) specification, which is due to be finalized this quarter.