Wednesday, January 11, 2012
Memory4Less Exclusive Deals! Get special discount on Memory, Hard Drives, Solid State Drives, CPUs...
Memory4Less offers huge discounts on Memory, Hard Drives, Processors, Solid State Drives, and Network Accessories. Get more hot deals on Power Supplies, Printer Accessories, and Graphic Cards.
Labels: Memory4less Deals
Wednesday, September 17, 2008
DDR3 SDRAM, or double-data-rate three synchronous dynamic random-access memory, is a random-access memory technology used for high-speed storage of the working data of a computer or other digital electronic device.
The primary benefit of DDR3 is the ability to transfer I/O data at eight times the speed of the memory cells it contains, thus enabling faster bus speeds and higher peak throughput than earlier memory technologies. However, there is no corresponding reduction in latency, which is therefore proportionally higher. In addition, the DDR3 standard allows for chip capacities of 512 megabits to 8 gigabits, effectively enabling a maximum memory module size of 16 gigabytes.
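The 16 GB maximum module size follows directly from the chip capacities the standard allows. As a sanity check, here is the arithmetic in Python, assuming a hypothetical high-density module built from 16 chip placements (actual chip counts and organizations vary):

```python
# Sanity-check the maximum DDR3 module size quoted above.
# Assumption: a module with 16 chip placements (illustrative; real DIMMs vary).
chip_density_gigabits = 8        # largest chip capacity the DDR3 standard allows
chips_per_module = 16            # hypothetical organization

module_gigabits = chip_density_gigabits * chips_per_module
module_gigabytes = module_gigabits // 8   # 8 bits per byte

print(module_gigabytes)          # 16, matching the stated 16-gigabyte maximum
```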
DDR3 memory promises a power consumption reduction of 30% compared to current commercial DDR2 modules due to DDR3's 1.5 V supply voltage, compared to DDR2's 1.8 V or DDR's 2.5 V. The 1.5 V supply voltage works well with the 90 nanometer fabrication technology used for most DDR3 chips. Some manufacturers further propose using "dual-gate" transistors to reduce current leakage.
According to JEDEC, the maximum recommended voltage is 1.575 volts; this should be treated as the absolute maximum when memory stability is the foremost consideration, such as in servers or other mission-critical devices. In addition, JEDEC states that memory modules must withstand up to 1.975 volts before incurring permanent damage, although they are not required to function correctly at that level.
The main benefit of DDR3 comes from the higher bandwidth made possible by DDR3's 8-bit-deep prefetch buffer, in contrast to DDR2's 4-bit prefetch buffer or DDR's 2-bit buffer.
DDR3 modules can transfer data at an effective clock rate of 800–1600 MHz using both rising and falling edges of a 400–800 MHz I/O clock. In comparison, DDR2's current range of effective data transfer rates is 400–800 MHz using a 200–400 MHz I/O clock, and DDR's range is 200–400 MHz based on a 100–200 MHz I/O clock. To date, the graphics card market has been the driver of such bandwidth requirements, where fast data transfer between framebuffers is required.
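The relationship between I/O clock and effective rate is simple doubling (one transfer per clock edge), and the PC3-12800 label mentioned below corresponds to peak throughput over a standard 64-bit DIMM bus. A minimal sketch of that arithmetic, assuming an 8-byte data bus:

```python
# Effective transfer rate and peak throughput for the DDR generations above.
# Double data rate: one transfer on the rising edge, one on the falling edge.
# Assumption: a standard 64-bit (8-byte) DIMM data bus.

def effective_rate_mhz(io_clock_mhz):
    return io_clock_mhz * 2              # both clock edges carry data

def peak_throughput_mb_s(io_clock_mhz, bus_bytes=8):
    return effective_rate_mhz(io_clock_mhz) * bus_bytes

# DDR3-1600: 800 MHz I/O clock
print(effective_rate_mhz(800))           # 1600 (MT/s)
print(peak_throughput_mb_s(800))         # 12800 MB/s, hence the "PC3-12800" label
```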
DDR3 prototypes were announced in early 2005. Products in the form of motherboards are appearing on the market as of mid-2007 based on Intel's P35 "Bearlake" chipset and memory DIMMs at speeds up to DDR3-1600 (PC3-12800). AMD's roadmap indicates their own adoption of DDR3 in 2008.
The typical latency for a DDR2 JEDEC standard was 5-5-5-15. The JEDEC standard latencies for the newer DDR3 memory are 7-7-7-15. One thing to be aware of, however, is that while these are the standards, manufacturing processes tend to improve with time. Eventually, DDR3 modules will likely be able to run at lower latencies than the JEDEC specifications. It is possible to find DDR2 memory that is faster than the standard 5-5-5-15 speeds, but it will take time for DDR3 to fall below the JEDEC latencies.
DDR3 latencies are numerically higher because the clock cycles by which they are measured are shorter; the actual time interval is generally equal to or lower than DDR2 latencies.
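This point is easy to verify with a quick conversion from cycles to nanoseconds. The speed grades below (DDR2-800 CL5 vs. DDR3-1333 CL7) are chosen here purely for illustration:

```python
# CAS latency is counted in I/O-clock cycles, so a numerically higher CL on a
# faster clock can still mean less real time.

def cas_latency_ns(cas_cycles, io_clock_mhz):
    cycle_time_ns = 1000.0 / io_clock_mhz    # one I/O-clock period in ns
    return cas_cycles * cycle_time_ns

ddr2 = cas_latency_ns(5, 400)      # DDR2-800: 400 MHz I/O clock, CL5
ddr3 = cas_latency_ns(7, 666.67)   # DDR3-1333: ~667 MHz I/O clock, CL7

print(round(ddr2, 2))              # 12.5 ns
print(round(ddr3, 2))              # 10.5 ns -- higher CL, lower actual latency
```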
Labels: DDR3 SDRAM
Thursday, August 21, 2008
The X18-M and X25-M will be available over the next 30 days. The "M" stands for "mainstream," and the drives are for use in PCs, notebooks and other consumer computing applications. They come in 80 GB and 160 GB capacities, and 1.8-inch and 2.5-inch form factors, with a 3 Gbps SATA II interface. Read performance is 250 MBps, while sequential writes are about 70 MBps.
The X25-E, or "Extreme," is set to follow in 90 days with smaller capacities – 32 GB and 64 GB models – and a sequential write performance that has been clocked in internal tests at 170 MBps. Also, 4 KB random-read IOPS have been measured at 35,000 per drive, and random writes at 3,300. According to Intel, this boost in performance comes from having parallel access through 10 I/O channels to each die in the enterprise version of the drive.
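To put those IOPS numbers alongside the sequential figures, a common conversion is IOPS times block size. A quick sketch (the conversion is generic; only the quoted figures come from the article):

```python
# Convert 4 KB random IOPS into an equivalent bandwidth figure, so the random
# numbers can be compared with the sequential MBps figures quoted above.

def iops_to_mb_s(iops, block_kb=4):
    return iops * block_kb / 1024.0   # KB/s -> MB/s

print(round(iops_to_mb_s(35000), 1))  # ~136.7 MB/s of 4 KB random reads
print(round(iops_to_mb_s(3300), 1))   # ~12.9 MB/s of 4 KB random writes
```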
While these drives, like other SSDs on the market, have yet to overcome the disparity between read and write performance on NAND flash, Intel has architected the drives to be more reliable than other SSDs.
One of the major issues with SSDs today, along with cost and write performance, is durability, which is a two-fold problem. First, SSDs are subject to a phenomenon known as "write amplification," because memory cells must first be erased completely in 1 MB chunks before new bits of data can be written. Most SSDs transfer the contents of the block to be erased to DRAM for new bits to be added, and erase the contents of the cell before transferring the newly reconstituted cell back to the NAND die it came from.
Knut Grimsrud, Intel Fellow, director of storage architecture, said this process can involve up to 32 I/O operations within the SSD over the original write from the application. Because NAND has a finite number of write/erase cycles, this can shorten the life of the medium. For the X-25 E, Intel claims each write I/O uses just 1.1 times the number of writes within the SSD.
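Write amplification is simply the ratio of bytes actually written inside the NAND to bytes the host asked to write. The sketch below models the naive read-modify-write case described above; the 4 KB host write and 128 KB erase block are illustrative sizes chosen to reproduce the 32× worst case, not figures from the article:

```python
# Write amplification = NAND bytes physically written / bytes the host requested.
# Rewriting a whole erase block to service one small update is what drives the
# "up to 32 I/O operations" worst case quoted above.

def write_amplification(host_bytes, nand_bytes):
    return nand_bytes / host_bytes

host_write = 4 * 1024          # one 4 KB application write (illustrative)
erase_block = 128 * 1024       # erase-block granularity (illustrative)

print(write_amplification(host_write, erase_block))        # 32.0 in this model

# Intel's claim for its enterprise drive: only 1.1x internal writes per host write.
print(write_amplification(host_write, host_write * 1.1))   # 1.1
```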
Intel won't divulge how that process works, claiming that such information is proprietary. Similarly, Intel also said its SSDs have special algorithmic efficiencies for wear leveling among the individual NAND dies within the SSD.
Intel measures wear-leveling efficiency by comparing the total number of cycles put on the most-used block with the total number of cycles put on an average block, Grimsrud said. This comparison is expressed as a ratio. He said this ratio can be 3:1 in some SSDs. With the X25-E, Intel claims that internal testing shows the most-cycled block with 10% more cycles than average, or a ratio of 1.1:1. "In terms of cycling endurance, a factor of three difference in wear-leveling performance also has a material impact on overall reliability," Grimsrud said.
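The metric described above is straightforward to compute: most-cycled block divided by the average. The cycle counts below are hypothetical, invented only to illustrate the two ratios mentioned:

```python
# Wear-leveling efficiency as described above: cycles on the most-used block
# divided by cycles on an average block.

def wear_ratio(block_cycles):
    avg = sum(block_cycles) / len(block_cycles)
    return max(block_cycles) / avg

# Hypothetical counts for a well-leveled drive (hot block ~7% above average)
good = [1000, 1050, 980, 1100, 1020]
print(round(wear_ratio(good), 2))   # 1.07, near the claimed 1.1:1

# Hypothetical counts for a poorly leveled drive (hot block at 3x average)
bad = [3000, 800, 600, 400, 200]
print(round(wear_ratio(bad), 2))    # 3.0, the 3:1 ratio cited for some SSDs
```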
Intel has not yet set pricing for the drives or identified OEM partners who will use its drives, but a video within an Intel press kit showed officials from Hewlett-Packard and Lenovo saying the mainstream drives will be put in their laptops. The video also included comment from John Fowler, Sun Microsystems' executive vice president of systems, who said Intel will join Samsung in supplying Sun its drives for upcoming hybrid storage devices.
IDC analyst Jeff Janukowicz said while it's unclear what exactly the mechanisms are behind Intel's longevity and reliability claims, "This is how we're going to start talking about SSDs going forward – we're going to be looking for vendors to differentiate according to these features."
Meanwhile, Intel's brand recognition will give the whole market a boost by definition, Janukowicz said. "Their brand brings a lot of validation to the SSD market," he said. "They're also bringing knowledge of PC and enterprise controller workloads, which are important to architecting SSDs."