Are you concerned about multicore? [closed]

Posted 2020-02-10 04:22

This is undeniable: multicore computers are here to stay.

So is this: efficient multicore programming is pretty difficult. It's not just a case of understanding pthreads.

This is arguable: whether the 'developer on the street' needs to concern him/herself with these developments.

To what extent are you concerned about having to expand your skillset for multicore? Is the software you are writing a candidate for parallelisation, and if so are you doing anything to educate yourself (if you didn't already know the techniques)? Or do you believe that the operating system will take care of most of it, the language runtime will do its bit and your application will happily sit on one core and let the others do their thing?

Tags: multicore
20 answers
Answer 2 · 2020-02-10 04:45

Just a side note: If your app has a GUI and does intense computation, ALWAYS do your intense computation on a separate thread. Forgetting to do this is why GUIs freeze up.
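To make that concrete, here's a minimal, toolkit-agnostic sketch of the idea in C++ (a fake polling loop stands in for a real GUI event loop, and the computation is a toy; actual toolkits have their own way to post results back to the UI thread):

```cpp
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

// Toy stand-in for an expensive computation.
long long expensive_sum(long long n) {
    long long total = 0;
    for (long long i = 0; i < n; ++i) total += i;
    return total;
}

int main() {
    std::atomic<bool> done{false};
    long long result = 0;

    // Run the heavy work on a worker thread so the "UI" thread stays free.
    std::thread worker([&] {
        result = expensive_sum(200'000'000);
        done = true;
    });

    // The main (UI) thread keeps servicing events; here we just fake an event loop.
    while (!done) {
        std::cout << "UI still responsive...\n";
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }

    worker.join();  // result is only read after the worker has finished
    std::cout << "result = " << result << "\n";
}
```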

Ridiculous、
Answer 3 · 2020-02-10 04:51

Yeah, I've been programming with threads, too. But I'm not masochistic enough to love them. It's still way too easy to get cross-talk between threads, no matter how much of a superman you are and however much help you get from coworkers. Threads are easy to start using but very difficult to use correctly, so of course Joe Schmoe gravitates to them; plus, they're fast! (which is all that matters, of course)

On *nix, good old fork() is still a good way to go for many things. The overhead is not too bad (yes, I'll need to measure that to back up my BS some day), particularly if you fork an interpreter and then generate a bunch of task-specific data in the child process.
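Roughly the pattern I mean, in bare-bones POSIX C++ (the child gets its own copy of the parent's data, so there's no shared-state cross-talk to worry about):

```cpp
#include <cstdio>
#include <iostream>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    pid_t pid = fork();               // duplicate the current process
    if (pid < 0) {
        std::perror("fork");
        return 1;
    }
    if (pid == 0) {
        // Child: its own address space, so nothing it touches can race the parent.
        std::cout << "child " << getpid() << " doing task-specific work\n";
        _exit(0);
    }
    // Parent: carry on with other work, then reap the child.
    int status = 0;
    waitpid(pid, &status, 0);
    std::cout << "parent " << getpid() << " saw child exit\n";
}
```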

That said, child processes are hideously expensive on Windoze, I'm told. So the Erlang approach is looking pretty good: force Joe Schmoe to write pure functions and use message passing instead of his seemingly-infinite-state automata global (instance) variable whack-fest with bonus thread cross-talk extravaganza.
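Erlang gives you cheap processes and mailboxes for free; a crude approximation of that style with plain threads is to let them talk only through a message queue, something like this sketch (the Mailbox class here is just an illustration, not any particular library's API):

```cpp
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// A crude "mailbox": the only shared state is this queue, guarded by one lock.
class Mailbox {
    std::queue<std::string> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void send(std::string msg) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(msg)); }
        cv_.notify_one();
    }
    std::string receive() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return !q_.empty(); });
        std::string msg = std::move(q_.front());
        q_.pop();
        return msg;
    }
};

int main() {
    Mailbox box;
    // The "worker process" only ever sees messages, never the sender's variables.
    std::thread worker([&] {
        for (;;) {
            std::string msg = box.receive();
            if (msg == "stop") break;
            std::cout << "worker got: " << msg << "\n";
        }
    });
    box.send("hello");
    box.send("world");
    box.send("stop");
    worker.join();
}
```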

But I'm not bitter :-)

Revision / comment:

Excellent comment elsewhere about distance-to-memory. I had been thinking about this quite a bit recently as well. Mark-and-sweep garbage collection really hurts the "locality" aspect of running processes. M/S GC on 0-wait-state RAM on an old 80286 may have seemed harmless, but it really hurts on multi-level caching architectures. Maybe reference counting + fork/exit isn't such a bad idea as a GC implementation in some cases?


edit: I put some effort into backing up my talk here (results vary): http://roboprogs.com/devel/2009.04.html

做自己的国王
Answer 4 · 2020-02-10 04:51

I think this is a great question. So, I've begun a series of blog posts about it here.

Dmckee's answer is correct in the narrowest sense. Let me rephrase in my own words here, implicitly including some of the comments:

There is no value in parallelizing operations that are not CPU bound. There is little value in parallelizing operations that are only CPU bound for short periods of time, say, less than a few hundred milliseconds. Indeed, doing so will most likely cause a program to be more complex, and buggy. Learning how to implement fine grained parallelism is complicated and doing it well is difficult.

That is true as far as it goes, but I believe the answer is richer for a broader set of programs. Indeed, there are many reasons to use multi-threaded, and thus implicitly multi-core, techniques in your production applications. For example, it is a huge benefit to your users to move disk and network I/O operations off your user interface thread.

This has nothing to do with increasing the throughput of compute-bound operations, and everything to do with keeping a program's user interface responsive. Note, you don't need a graphical UI here - command line programs, services, and server-based applications can benefit from this as well.
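As a rough illustration (not tied to any particular UI framework, and the file name is just a placeholder), std::async is one easy way to push slow I/O onto another thread while the main thread keeps servicing events:

```cpp
#include <chrono>
#include <fstream>
#include <future>
#include <iostream>
#include <sstream>
#include <string>

// Simulated slow I/O: read a whole file (could just as well be a network call).
std::string slow_read(const std::string& path) {
    std::ifstream in(path);
    std::ostringstream buf;
    buf << in.rdbuf();
    return buf.str();
}

int main() {
    // Kick the I/O off to another thread; the main thread is not blocked on it.
    auto pending = std::async(std::launch::async, slow_read, "input.txt");

    // Main thread keeps doing its own work (servicing a UI, handling requests, ...).
    while (pending.wait_for(std::chrono::milliseconds(50)) != std::future_status::ready) {
        std::cout << "main thread still responsive\n";
    }

    std::cout << "read " << pending.get().size() << " bytes\n";
}
```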

I completely agree that taking a CPU-bound operation and parallelizing it can often be a complex task - requiring knowledge of fine-grained synchronization, CPU caching, CPU instruction pipelines, and so on. Indeed, this can be classically 'hard'.

But I would argue that the need to do this is rare; there are just not that many problems that need this kind of fine-grained parallelism. Yes, they do exist, and you may deal with this every day, but I would argue that in the day-to-day life of most developers, this is pretty rare.

Even so, there are good reasons to learn the fundamentals of multi-threaded, and thus multi-core development.

  1. It can make your program more responsive from a user perspective by moving longer operations off the message loop thread.
  2. Even for things that are not CPU bound, it can often make sense to do them in parallel.
  3. It can break up complex single threaded state machines into simpler, more procedural code.

Indeed, the OS already does a lot for you here, and you can use libraries that are multi-core enabled (like Intel's stuff). But operating systems and libraries are not magic - I argue that it is valuable for most developers to learn the basics of multi-threaded programming. This will let you write better software that your users are happier with.
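As one small sketch of what "the library does the multicore work" can look like, C++17's parallel algorithms (often backed by a library like Intel's TBB under the hood, depending on your toolchain) spread a loop across cores without the calling code managing any threads at all:

```cpp
#include <algorithm>
#include <execution>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> v(10'000'000);
    std::iota(v.begin(), v.end(), 0.0);   // fill with 0, 1, 2, ...

    // The parallel execution policy lets the standard library spread the work
    // across cores; the calling code still reads like a plain sequential loop.
    std::for_each(std::execution::par, v.begin(), v.end(),
                  [](double& x) { x = x * x; });

    std::cout << v.back() << "\n";
}
```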

Of course, not every program should be multi-threaded or multi-core enabled. It is just fine for some things to be implemented in a simple single-threaded manner. So don't take this as advice that every program should be multi-threaded - use your own good judgment here. But it can often be a valuable technique and very beneficial in many regards. As mentioned above, I plan on blogging about this a bit, starting here. Feel free to follow along and post comments there as you feel inclined.

甜甜的少女心
Answer 5 · 2020-02-10 04:52

Dataflow programming shows some promise for a relatively easy solution to the multicore problem.

As Wikipedia says, though, it requires a fairly major paradigm shift, which seems to prevent its easy adoption by the programming community.

兄弟一词,经得起流年.
Answer 6 · 2020-02-10 04:53

No. I feel that multicore will make a significant difference in certain areas of programming but will barely affect other areas. After a while, the areas it does affect will absorb and encapsulate it, and the hype will barely touch the rest.

劳资没心,怎么记你
Answer 7 · 2020-02-10 04:55

No, I'm not worried.

My work is a little unusual and possibly parallelises more easily than average, but regardless I see it as more of an opportunity than a problem.

Partly I'm impatient for things to get to the point where it's really worth optimising for multicore. I don't know what the exact numbers are at the moment, but it seems like half our clients have single-core machines, 49% have dual-core and maybe 1% have quad-core. That means that multithreading doesn't really give a huge performance gain in most cases and hence isn't really worth spending much time on.

In a few years' time, when the average might be quad-core, there's going to be much more of a case for spending a bit of time on clever multithreading code - which I think is going to be a good thing for us developers. All we need is for Intel and AMD to hurry up and make more of them... :-)
