[PRL] Apple's new OS geared for multicore future | Deep Tech - CNET News

Vasileios Koutavas vkoutav at ccs.neu.edu
Tue Sep 1 06:38:35 EDT 2009


I just came across this more detailed explanation of GCD from Ars  
Technica that even has code samples:

http://arstechnica.com/apple/reviews/2009/08/mac-os-x-10-6.ars/10

The relevant pages are 10 (blocks), 11 (multithreading-is-hard), and  
12-13 (GCD).

>  Blocks add closures and anonymous functions to C and the C-derived  
> languages C++, Objective-C, and Objective-C++.
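
For anyone who hasn't seen the syntax: a block is written with a caret and
can capture variables from the enclosing scope. A quick sketch of my own
(plain C with Clang's blocks extension, not code from the article):

    #include <stdio.h>

    int main(void) {
        int factor = 3;

        /* An anonymous function that captures 'factor' by value. */
        int (^scale)(int) = ^(int x) { return x * factor; };

        printf("%d\n", scale(14));   /* prints 42 */
        return 0;
    }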

Blocks simplify the packaging of computation that is sent to queues:

> [...] A queue in GCD is just what it sounds like. Tasks are  
> enqueued, and then dequeued in FIFO order. [...] Dequeuing the task  
> means handing it off to a thread where it will execute and do its  
> actual work.

> [...]

> GCD maintains a global pool of threads which it hands out to queues  
> as they're needed. When a queue has no more pending tasks to run on  
> a thread, the thread goes back into the pool.

> This is an extremely important aspect of GCD's design. Perhaps  
> surprisingly, one of the most difficult parts of extracting maximum  
> performance using traditional, manually managed threads is figuring  
> out exactly how many threads to create. Too few, and you risk  
> leaving hardware idle. Too many, and you start to spend a  
> significant amount of time simply shuffling threads in and out of  
> the available processor cores.
>
> Let's say a program has a problem that can be split into eight  
> separate, independent units of work. If this program then creates  
> four threads on an eight-core machine, is this an example of  
> creating too many or too few threads? Trick question! The answer is  
> that it depends on what else is happening on the system.
>
> If six of the eight cores are totally saturated doing some other  
> work, then creating four threads will just require the OS to waste  
> time rotating those four threads through the two available cores.  
> But wait, what if the process that was saturating those six cores  
> finishes? Now there are eight available cores but only four threads,  
> leaving half the cores idle.
>
> With the exception of programs that can reasonably expect to have  
> the entire machine to themselves when they run, there's no way for a  
> programmer to know ahead of time exactly how many threads he should  
> create. Of the available cores on a particular machine, how many are  
> in use? If more become available, how will my program know?
>
> The bottom line is that the optimal number of threads to put in  
> flight at any given time is best determined by a single, globally  
> aware entity. In Snow Leopard, that entity is GCD. It will keep zero  
> threads in its pool if there are no queues that have tasks to run.  
> As tasks are dequeued, GCD will create and dole out threads in a way  
> that optimizes the use of the available hardware. GCD knows how many  
> cores the system has, and it knows how many threads are currently  
> executing tasks. When a queue no longer needs a thread, it's  
> returned to the pool where GCD can hand it out to another queue that  
> has a task ready to be dequeued.
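
To make the blocks-plus-queues picture concrete, here is roughly what the
"eight independent units of work" example looks like with the libdispatch
C API (a minimal sketch of mine, not code from the article; I haven't run
it):

    #include <dispatch/dispatch.h>
    #include <stdio.h>

    int main(void) {
        /* A concurrent queue backed by GCD's global thread pool. */
        dispatch_queue_t global =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        /* Eight independent units of work; GCD decides how many pool
           threads to devote to them given the cores actually available. */
        dispatch_apply(8, global, ^(size_t i) {
            printf("unit %zu done\n", i);
        });

        /* dispatch_apply returns only after all iterations complete. */
        return 0;
    }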

What I didn't see in the article is how (or whether) GCD deals with  
shared state and locking.
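
My guess (nothing in the article confirms this) is that the idiomatic
answer is to replace a lock with a private serial queue and funnel all
access to the shared state through it. A sketch of what I have in mind:

    #include <dispatch/dispatch.h>
    #include <stdio.h>

    int main(void) {
        __block int counter = 0;   /* shared state */

        /* The serial queue plays the role of the lock: blocks submitted
           to it run one at a time, in order. */
        dispatch_queue_t counter_q =
            dispatch_queue_create("com.example.counter", NULL);
        dispatch_queue_t global =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        /* Many concurrent writers, each funnelled through the serial
           queue, so the increments never race. */
        dispatch_apply(1000, global, ^(size_t i) {
            dispatch_sync(counter_q, ^{ counter++; });
        });

        printf("counter = %d\n", counter);   /* 1000 */
        dispatch_release(counter_q);
        return 0;
    }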

--Vassilis


On 31 Aug 2009, at 16:54, Mitchell Wand wrote:

> Snow Leopard's approach to multicore:
>
> Enter Grand Central Dispatch, or GCD. This Snow Leopard component is  
> designed to minimize many of the difficulties of parallel  
> programming. It's easy to modify existing software to use GCD, Apple  
> said, and the operating system handles complicated administrative  
> chores so programmers don't have to.
>
> ...
>
> The core mechanisms within GCD are blocks and queues. Programmers  
> mark code chunks to convert them into blocks, then tell the  
> application how to create the queue that governs how those blocks  
> are actually run. Block execution can be tied to specific events:  
> the arrival of network information, a change to a file, a mouse click.
> Apple hopes programmers will like blocks' advantages: Older code can  
> easily be retrofitted with blocks so programmers can try it without  
> major re-engineering; they're lightweight and don't take up  
> resources when they're not running; and they're flexible enough to  
> encapsulate large or small parts of code.
>
> "There's a lot of overhead around threading that means you want to  
> break your program into as few pieces as possible. With Grand  
> Central Dispatch, we say break your program into as many tiny pieces  
> as you can conceive of," Hodges said.
>
> Another difference with the Grand Central Dispatch approach is its  
> centralization. The operating system worries about managing all  
> applications' blocks rather than each application providing its own  
> oversight. That central view means the operating system decides  
> which tasks get which resources, Apple said, and that the system  
> overall can become more responsive even when it's busy.
>
> Link: http://news.cnet.com/8301-30685_3-10319839-264.html (via  
> shareaholic)
>
> Anybody know more?
>
> --Mitch
