edahlgren/Threads-to-Sparks-to-Fireworks

Comparison of Haskell's bright and shiny Parallel & Concurrency support, with runtime stats and comments on algorithm flow

This repository documents Parallel and Concurrent Haskell techniques.
The broad goal is to build up a wide and deep catalogue of pitfalls and runtime profiles.

The computational goal is to reduce the amount of interwoven sequential computation and IO overhead.

The data I'm ultimately interested in is both hierarchical and large in scale.
1. I start simple with length-of-linked-list functions,
   which perform poorly when parallelized.

   This comes as no surprise: linked lists have good locality and
   (oftentimes) low density (flat). Parallelization breaks up this
   locality, and the garbage collection overhead of splitting the list
   into parallelizable chunks costs more than the parallelism gains
   (a rough sketch of the approach follows this list).

   Test modules are broken into 'gold', 'magenta', and 'red' bins:
   'red'     tests performance over flat linked lists of size 'end'
   'magenta' tests performance over linked lists of constant-size nested lists
   'gold'    tests performance over linked lists with variable-size nested lists

   Visualization -> ThreadScope
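
As a rough sketch of the length-over-chunks technique (not the code in the
test modules; the module name, the chunk size parameter, and the chunksOf
helper are my own assumptions), parallelizing length with
Control.Parallel.Strategies looks roughly like this:

    -- Illustrative only: not the repository's actual test code.
    module ParLength where

    import Control.Parallel.Strategies (parList, rseq, using)
    import Data.List (foldl')

    -- Sequential baseline: walk the spine of the list once.
    seqLength :: [a] -> Int
    seqLength = foldl' (\n _ -> n + 1) 0

    -- Split a list into chunks of at most n elements.
    chunksOf :: Int -> [a] -> [[a]]
    chunksOf _ [] = []
    chunksOf n xs = let (h, t) = splitAt n xs in h : chunksOf n t

    -- Parallel version: count each chunk in its own spark, then sum.
    -- Chunking still walks the spine sequentially and allocates a new
    -- list of chunks, which is the garbage collection cost described above.
    parLength :: Int -> [a] -> Int
    parLength chunkSize xs =
      sum (map length (chunksOf chunkSize xs) `using` parList rseq)

To look at the resulting sparks in ThreadScope, a program like this is
typically compiled with -threaded -eventlog and run with +RTS -N -ls,
which writes a .eventlog file that ThreadScope can open.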
