Trump wrote: Serg89
AMD has already failed in any case, for missing its deadlines.
AMD has a new boss; if he wants to, he could push the Bulldozer launch back by as much as six months.
Added 2 minutes 38 seconds later: http://forum.ixbt.com/topic.cgi?id=8:23407-100 On iXBT there are people who saw someone from AMD claim on one of the forums that all the images circulating around the net have been through Photoshop.
Added 11 minutes 28 seconds later: JF on module performance
OK, you can all stop now.
The overhead for a module is pretty low. Remember that 2 threads on a module run at ~180% of a single thread. That means that both take a ~10% hit for sharing (90% + 90% = 180%).
Now, let's consider that running 2 threads on 2 separate modules nets you somewhere between 190-195% throughput. These are my made-up numbers. But depending on the application, typically, on a normal system, there is some overhead for things like dependent results. This means the overhead is ~2.5-5%. So, the delta between running on one module vs. running on 2 modules is ~5-7.5%.
Seems pretty negligible.
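To make that arithmetic concrete, here is a minimal Python sketch using only the scaling figures from the quote (which JF himself calls made-up numbers, not measurements):

```python
# Scaling figures taken from the quote above (not measured data).
single = 1.00                     # one thread on one module (baseline)
shared = 1.80                     # two threads sharing one module (~180%)
split_lo, split_hi = 1.90, 1.95   # two threads on two separate modules

# Per-thread hit for sharing a module: 90% + 90% = 180%, i.e. ~10% each.
sharing_hit = 1 - shared / 2                              # 0.10

# Overhead of two separate modules vs an ideal 200%: ~2.5-5%.
split_overhead = (1 - split_hi / 2, 1 - split_lo / 2)     # (0.025, 0.05)

# Delta between packing one module and using two: ~5-7.5% per thread.
delta = ((split_lo - shared) / 2, (split_hi - shared) / 2)  # (0.05, 0.075)

print(f"sharing hit per thread: {sharing_hit:.1%}")
print(f"two-module overhead:    {split_overhead[0]:.1%} to {split_overhead[1]:.1%}")
print(f"one vs two modules:     {delta[0]:.1%} to {delta[1]:.1%}")
```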
Now, compare that with having modules shut down and getting more clock speed out of it.
Also, what if the 2 threads actually are sharing data? In the case of 2 modules, there are cache probes and wait states while the cores search and retrieve cache data, vs. sharing a cache and having everything available.
This is soooooo different from hyperthreading, where you want to only put a single thread on a physical core. In this world, the shared world, there are benefits to filling up modules.
Now, more importantly, keep in mind that applications spawn threads based on activity. There are "heavy threads" and "light threads." There are "long threads" and "short threads." Don't think that there is some kind of orderly process and that you can outsmart the OS. The system will look for open resources and apply them.
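The clock-speed trade-off JF alludes to can also be put into a toy model. The module scaling figures below come from the quote; the per-idle-module turbo uplift is a hypothetical number picked purely for illustration (and the model ignores the shared-data benefit entirely), so the printed ranking says nothing about real Bulldozer behavior. The point is only that the ~5-7.5% gap is small enough that clock headroom from power-gated modules could plausibly tip the balance toward filling a module first.

```python
# Toy comparison of two placement policies for 2 threads on a 4-module chip.
MODULES = 4
SHARED = 1.80    # 2 threads packed into one module (~180%, from the quote)
SPLIT = 1.925    # 2 threads on 2 modules (midpoint of the quoted 190-195%)
TURBO_PER_IDLE_MODULE = 0.03   # hypothetical +3% clock per power-gated module

def throughput(base: float, busy_modules: int) -> float:
    """Aggregate throughput of the 2 threads, including turbo from idle modules."""
    idle = MODULES - busy_modules
    return base * (1 + TURBO_PER_IDLE_MODULE * idle)

print(f"spread (1 thread per module): {throughput(SPLIT, 2):.3f}x")
print(f"fill   (pack one module):     {throughput(SHARED, 1):.3f}x")
```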