8 thoughts on “Lodash is evolving in a good direction”

  1. Interesting, though in all my years I have never needed to do the things that methods like “chunk” and “compact” do (just briefly glancing at the docs):

    _.compact(array)


    Creates an array with all falsey values removed. The values false, null, 0, “”, undefined, and NaN are falsey.

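    In plain JavaScript (without lodash), the same behavior can be sketched with `Array.prototype.filter(Boolean)`, since `Boolean` rejects exactly the falsey values listed above. The `compact` name here just mirrors the lodash method; this is a sketch, not the library's implementation:

    ```javascript
    // Sketch of _.compact using the built-in filter: Boolean coerces each
    // element, so every falsey value (false, null, 0, "", undefined, NaN)
    // is dropped and only truthy elements survive.
    const compact = (array) => array.filter(Boolean);

    const result = compact([0, 1, false, 2, "", 3, null, undefined, NaN]);
    // result is [1, 2, 3]
    ```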

  2. It’s never too late to start!

    I use a subset of lodash heavily: map, reduce, filter, transform, head, tail, keyBy.

    The thing here is that lodash is becoming more like other functional libraries (and languages) which means you can compose the functions more simply.

    If you are writing loops and using a lot of control structures, I’m pretty convinced you could write better code with generic functions, which would be easier to read and involve a lot less debugging.

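    The composition point above can be sketched with a hand-rolled `flow` (lodash’s `_.flow` behaves similarly) to keep the example dependency-free; the function names here are made up for illustration:

    ```javascript
    // Hand-rolled flow, mirroring lodash's _.flow: pipes a value through
    // each function from left to right.
    const flow = (...fns) => (input) => fns.reduce((acc, fn) => fn(acc), input);

    // A loop with control structures...
    function totalOfEvenSquaresLoop(numbers) {
      let total = 0;
      for (const n of numbers) {
        if (n % 2 === 0) {
          total += n * n;
        }
      }
      return total;
    }

    // ...versus the same logic composed from generic functions.
    const totalOfEvenSquares = flow(
      (ns) => ns.filter((n) => n % 2 === 0),
      (ns) => ns.map((n) => n * n),
      (ns) => ns.reduce((sum, n) => sum + n, 0)
    );

    const loopResult = totalOfEvenSquaresLoop([1, 2, 3, 4]); // 4 + 16 = 20
    const composedResult = totalOfEvenSquares([1, 2, 3, 4]); // also 20
    ```

    Each stage in the composed version does one thing, so it can be read, tested, and reused independently of the others.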

  3. I use it client side. It’s not really big data, I’m just trying to avoid endlessly re-inventing the wheel. Most of what programmers do is boilerplate and bugs. 


  4. Michael Tufekci: I’d speculate that using such an abstract _.compact rather than some form of grep is mostly useful when you program APL style, commonly invoking vector or matrix operations that return sparse arrays.  In imperative style, you’d write a loop instead, and in functional style, you’d often do the filtering before getting a sparse array.


  5. Andres Soolo: Makes me nervous deleting all the zeros from numerical arrays, because data could inadvertently be corrupted, but there’s obviously a market for this method. More curiosity than anything.


  6. Michael Tufekci: Imagine calling an operator that calculates a Boolean operation for an array.  In APL style, you’ll get an array of Boolean values.  You might now multiply it, element-wise, with the original array, getting an array in which the positions with ‘true’ in the Boolean array hold the original value, and positions with ‘false’ hold a zero.  Then, when you “compact” it, you will effectively have called a filter operation.

    This makes sense for some kinds of mathematical abstractions, and for the once-popular vector supercomputers that were very fast when doing one and the same thing in a tight loop (it was called vector processing), but lost much of their performance advantage if they had to stop in the middle of the loop and decide which way to go from there.  To some degree, all modern pipelined processors suffer from this, but without massive vectorisation the drawback is smaller, and, well, a modern consumer processor’s computing time is much cheaper than supercomputer time.

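    The mask-multiply-compact sequence described above can be sketched in plain JavaScript (the threshold and data are made up for illustration):

    ```javascript
    // APL-style filtering: build a Boolean mask, multiply it element-wise
    // with the original array, then compact the zeros away.
    const data = [3, 7, 1, 9, 4];
    const mask = data.map((x) => (x > 3 ? 1 : 0));  // [0, 1, 0, 1, 1]
    const masked = data.map((x, i) => x * mask[i]); // [0, 7, 0, 9, 4]
    const compacted = masked.filter(Boolean);       // [7, 9, 4]

    // Caveat from comment 5: a genuine 0 already present in the data
    // would also be dropped here, which is how compacting numerical
    // arrays can silently corrupt them.
    ```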
