Run N parallel jobs in PowerShell


The Start-Job cmdlet allows you to run code in the background. To do what you're asking, something like the code below should work. It uses Wait-Job -Any to emulate throttling.

foreach ($server in $servers) {
    # Throttle: wait until fewer than 8 jobs are running before starting another
    while (@(Get-Job | Where-Object { $_.State -eq 'Running' }).Count -ge 8) {
        Get-Job | Wait-Job -Any | Out-Null
    }
    Start-Job -ArgumentList $server -ScriptBlock {
        param($server)
        Add-PSSnapin SQL
        # assuming -ServerInstance here; the original snippet was truncated ("-Server...")
        $list = Invoke-Sqlcmd 'exec getOneMillionRows' -ServerInstance $server
        # ... work with $list ...
    }
}

# Wait for the remaining jobs to finish, then collect all output
Get-Job | Wait-Job | Out-Null
Get-Job | Receive-Job

Throttling Go routines


Yes, it’s complicated, but there are a couple of rules of thumb that should make things feel much more straightforward.

  • prefer using formal arguments for the channels you pass to go-routines instead of accessing channels in global scope. You can get more compiler checking this way, and better modularity too.
  • avoid both reading and writing on the same channel in a particular go-routine (including the ‘main’ one). Otherwise, deadlock is a much greater risk.

Here’s an alternative version of your program, applying these two guidelines. This case demonstrates many writers & one reader on a channel:

c := make(chan string)

for i := 1; i <= 5; i++ {
    go func(i int, co chan<- string) {
        for j := 1; j <= 5; j++ {
            co <- fmt.Sprintf("hi from %d.%d", i, j)
        }
    }(i, c)
}

for i := 1; i <= 25; i++ {
    fmt.Println(<-c)
}
It creates five goroutines writing to a single channel, each one writing five times. The main goroutine reads all twenty-five messages; you may notice that they often arrive out of sequential order (i.e. the concurrency is evident).

This example demonstrates a feature of Go channels: it is possible to have multiple writers sharing one channel; Go will interleave the messages automatically.

The same applies for one writer and multiple readers on one channel, as seen in the second example here:

c := make(chan int)
var w sync.WaitGroup
w.Add(5)

for i := 1; i <= 5; i++ {
    go func(i int, ci <-chan int) {
        defer w.Done()
        j := 1
        for v := range ci {
            fmt.Printf("%d.%d got %d\n", i, j, v)
            j += 1
        }
    }(i, c)
}

for i := 1; i <= 25; i++ {
    c <- i
}
close(c)
w.Wait()

This second example includes a wait imposed on the main goroutine, which would otherwise exit promptly and cause the other five goroutines to be terminated early (thanks to olov for this correction).

In both examples, no buffering was needed. It is generally a good principle to view buffering as a performance enhancer only. If your program does not deadlock without buffers, it won’t deadlock with buffers either (but the converse is not always true). So, as another rule of thumb, start without buffering then add it later as needed.

Another explanation of goroutine execution throttling: sends and receives are blocking by default.

Golang – What is channel buffer size?


The buffer size is the number of elements that can be sent to the channel without the send blocking. By default, a channel has a buffer size of 0 (you get this with make(chan int)). This means that every single send will block until another goroutine receives from the channel. A channel of buffer size 1 can hold 1 element until sending blocks, so you’d get

c := make(chan int, 1)
c <- 1 // doesn't block
c <- 2 // blocks until another goroutine receives from the channel

Variable captured by func literal


It’s a common mistake for newcomers to Go, and yes, the variable currentProcess changes on each loop iteration, so your goroutines will all end up using the last process in the slice l.processes. All you have to do is pass the variable as a parameter to the anonymous function, like this:

func (l *Loader) StartAsynchronous() []LoaderProcess {

    for ix := range l.processes {

        go func(currentProcess *LoaderProcess) {

            cmd := exec.Command(currentProcess.Command, currentProcess.Arguments...)
            log.LogMessage("Asynchronously executing LoaderProcess: %+v", currentProcess)

            output, err := cmd.CombinedOutput()
            if err != nil {
                // include output in the log so the variable is used
                log.LogMessage("LoaderProcess exited with error status: %+v\n %v\n output: %s", currentProcess, err.Error(), output)
            } else {
                log.LogMessage("LoaderProcess exited successfully: %+v", currentProcess)
            }

            time.Sleep(time.Second * TIME_BETWEEN_SUCCESSIVE_ITERATIONS)

        }(&l.processes[ix]) // passing the current process using its index

    }

    return l.processes
}

Another example:

How should open source programmers make money?

Different people split this up in different ways, but here’s my list of software models:

  • The Jacquard loom model.
  • The early research model
  • The early IBM model
  • The AT&T forced model
  • The late research model (Copycenter)
  • The proprietary model (Copyright)
  • The shareware model
  • The GPL model (Copyleft)
  • The Mozilla model
  • The modern tactical model

Each of these models has (or had) its benefits and drawbacks. I’ve arranged them in roughly historical order.

So back to the original question…

Individual programmers can profit from Open Source by:

  • Fame/exposure; most people can’t take this route, and it only gets you hired in some places
  • Being paid by a company to develop it
  • Writing books about it
  • Getting grants/stock options over it (Linus Torvalds became very rich this way; however: you are not Linus)

  • Leveraging Open Source as part of a project (tactical code); bonus: you offload ongoing support and maintenance onto the community, so long as your contributions actually provide value to them

Beyond direct profit, contributors also benefit:

  • They build reputation in the community
  • They have something to point to as work they’ve done
  • They build their resume

What is an example of something true that nobody generally wants to admit?

“Success is procured from sabotaging someone else’s happiness.”

Getting a job for yourself means no job for someone else; getting a seat at a top-tier college leaves many of your own friends jealous and unhappy. The invention of robots left common laborers jobless. The success of Flipkart, Amazon, Walmart, and Reliance took away the livelihoods of countless small-scale business owners.

inessential – Social Media Mobs


  • “It felt like a mob. The mob wasn’t in any way self-aware or coordinated — but it still felt like a single driven thing.”
  • “it was over my own software and a mistake”
  • “The mob never apologized. Mobs never do.”
  • “For a long time after everyone forgot.”
  • “So I’m thinking about whether or not to stay.”


In 2009, the author was mobbed on Twitter over one or two bugs in software he had written; he even considered leaving the tech world and giving up programming. In truth, online mobs won’t keep attacking you for long: they quickly move on to a new target and soon forget about you.

Once you have any degree of visibility, say a few thousand followers on your Weibo account, you’re bound to be insulted by people for no reason at all. If some people like you, others are sure to hate you. It’s a package; you have to accept the good along with the bad. Don’t ask me how I know :) speaking from experience~

Photo map, center synced with GPS

The module does the following:
1. Extract GPS data from photos in the img/<tripname> directory; if no GPS data is found for a photo, use the previous photo’s value, with photos sorted by name (in effect, by date/time).
2. Use scikit-learn to cluster the points. The algorithm used is DBSCAN, with epsilon = 10 km / 6371.0088 km per radian and a minimum cluster size of 1; GPS data is converted to radians before computation.
3. Use an API to convert the clustered points to location names (the first photo in each cluster is used as the marker point; no centroid is computed).
4. Use template.html to generate <tripname>.html with the necessary JavaScript and HTML.
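Step 2 can be sketched with scikit-learn roughly like this. The coordinates and variable names are made up for illustration; the module’s actual code may differ:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# (lat, lon) in degrees for each photo -- illustrative sample points
points = np.array([
    [48.8584, 2.2945],   # Eiffel Tower
    [48.8606, 2.3376],   # Louvre, ~3 km away -> should join the same cluster
    [41.8902, 12.4922],  # Colosseum, ~1100 km away -> separate cluster
])

kms_per_radian = 6371.0088
eps = 10.0 / kms_per_radian  # 10 km radius expressed in radians

# haversine metric expects [lat, lon] in radians; min_samples=1 means
# every point belongs to some cluster (no noise points)
db = DBSCAN(eps=eps, min_samples=1, metric="haversine", algorithm="ball_tree")
labels = db.fit_predict(np.radians(points))
```

The two Paris photos end up in one cluster and the Rome photo in another; the first photo of each cluster would then be geocoded for the marker.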


GPS data extraction

Scikit-learn DBSCAN

jQuery scroll-into-view animation