Functional Programming

A few new papers

Planet Haskell - 18 hours 15 min ago
I’ve just updated my web page with links to some new papers that are now available: “Homotopical Patch Theory” by Carlo Angiuli, Ed Morehouse, Dan Licata, and Robert Harper. To appear, ICFP, Gothenburg, October 2014. We’ve also prepared a slightly expanded version with a new appendix containing material that didn’t make the cut for ICFP. […]

EclipseFP 2.6.1 released!

Planet Haskell - 18 hours 15 min ago
I've just released EclipseFP 2.6.1. EclipseFP is a set of Eclipse plugins for Haskell development. This is a bug fixing release, mainly for GHC 7.8 support.Release notes can be found here.As usual, download from http://eclipsefp.sf.net/updates.Happy Haskell Hacking!

CTO / Tech Co-Founder at Capital Match (Full-time)

Planet Haskell - 18 hours 15 min ago
TL;DR: start and build the technology for a financial services marketplace in Asia. Compensation is salary plus double digit percent equity. There will be a short trial period to make sure both sides want to work with each other. Relocation to Singapore mandatory (trial could be remote and part-time). Company Capital Match is bringing peer-to-peer lending (basically, a marketplace for retail/institutional lenders and corporate borrowers that bypasses the banking system) to Southeast Asia, where for various reasons the US incumbents have not entered. The founders are well connected and are bringing the right contacts and background to make it happen. The company started as a traditional financier for SMEs to better understand the market as well as legal and credit aspects of the business before it would embark on the P2P model. If you would like to learn more about the business model, here is a link explaining it from the point of view of current very successful US incumbents: http://www.foundationcapital.com/downloads/FoundationCap_MarketplaceLendingWhitepaper.pdf Job description and compensation The CTO will first build the marketplace, then grow the team as it gains traction. We provide the legal, financial and admin functions as well as the market research backing a high level functional spec; you just need to worry about building the product. The division of labour will be very clear: you are the final call on anything technical, and nobody will come micromanage your work. Compensation will be a lowish middle class salary by Singapore standards and double digit percent equity, subject to a trial period. Note this is not a strictly technical business, and the marketplace problem is a relatively straightforward and well known one, with the value in the contacts and understanding of the market that goes into the functional spec. Though technology could bring a distinct value and advantage over time. Additionally, we have eschewed raising much funding for now and most of the capital comes from the founders' personal savings (which we think is a positive signal - our interests are aligned) so don't expect Silicon Valley perks for a while. We don't have hog roasts and whisky tasting Fridays, but you get a real, founder-level stake in the company. Relocation to Singapore is primordial for the CTO, although the rest of the team you'll build can be remote. During a trial period you can work remotely and part-time. Tech stack Thanks to one founder's very positive experiences with the Haskell experiment at Zalora, we are very keen to use functional programming languages, especially Haskell. We are however technology agnostic ("best stack for the problem"). We have a bias towards those who prefer the relational model over NoSQL and towards open source. Desired experience The CV matters less than your ability to build things, so please send us any major open source project you have authored, both a link to the repo and a "basic" description targeted at the non-technical founders. We would prefer to see some financial services experience, especially on the security side, and some experience building similar products would be even better. We want to utilize ample local government funding for high-tech start-ups so scientific / high-tech background and a post-grad degree would be preferred. You can attempt to apply without an open source repo to your name, in that case build us a demonstration of your skills that you think reflects your ability. 
Please send your application to pawel [at] capital-match [dot] com

Parallelism and Concurrency, Revisited

Planet Haskell - 18 hours 15 min ago
To my delight, I still get compliments on and criticisms of my post from three years ago (can it possibly be that long?) on parallelism and concurrency.  In that post I offered a “top down” argument to the effect that these are different abstractions with different goals: parallelism is about exploiting computational resources to maximize efficiency, concurrency is […]

Meditations on Using Haskell

Planet Haskell - 18 hours 15 min ago
Bitemyapp - Meditations on Using Haskell explains why and how those in the trenches use Haskell, by quoting from conversations on an IRC channel.

E: So when i found haskell i slingshotted off through dependent and substructural types. Assuming that if a little was good a lot was better. Made it half way through TaPL and found pure type systems, coq, etc. I think the power to weight ratio isn't there. I find that Haskell gives amazingly expressive types that have amazing power for the amount of code you tie up in them and that are very resistant to refactoring. If i write agda and refactor I scrap and rewrite everything. If i write haskell, and get my tricky logic bits right? I can refactor it, split things up into classes, play all the squishy software engineering games to get a nice API I want. And in the end if it still compiles I can trust I didn't screw up the refactoring with a very high degree of assurance.

C: Admittedly I'm not playing at the level E is, but this was my experience. I can make sweeping changes to my API, get all the bugs caught by the type system, and still have minimal code impact.

B: That is what I was getting at with the tweet about not using dynamically typed langs because I need to be able to prototype quickly and get rapid feedback. I think a lot of my friends thought i was just being trollish. Even just being able to see what would have to change if you changed your design slightly and being able to back it out quickly…

That's totes my Bag!

Planet Haskell - 18 hours 15 min ago
So, does that mean I like tote-bags?

So, today's question on @1HaskellADay was this: write a function

countOccurences :: [Stirng] -> Map Char Int

(typos faithfully reproduced) such that

lookup 'l' $ countOccurences "Hello" ~> Just 2
lookup 'q' $ countOccurences "Hello" ~> Nothing

Okay, that can be done easily enough, I suppose, by torquing Map into something that it isn't, so one gets wrapped around the axle of creating a mapping from characters to occurrences. But why?

First of all, countOccurences maps a String (not a List of Strings) to a Map, and that map is a very specialized kind of map that has existed in the literature for quite a while, and that map is known as the Bag data type, and is also, nowadays, called the MultiSet by people too embarrassed to say the word 'bag' in a sentence, because of their prior drug convictions. ("I got two months for selling a dime bag.") So they now twist the word 'Set' (a 'collection of unique objects') to mean something that's not a set at all, the 'Multi'Set, which is a 'collection of unique objects, but you can have multiples of these unique objects, so they're not unique at all, so it isn't a set at all, but we need to say the word 'set' because we can't say the word 'bag' because saying the word 'bag' would make us sound plebeian for some reason.'

Yeah, that. 'MultiSet.' What. Ev. Er. But I digress. As always.

So I COULD write countOccurences as a String -> Map Char Int function, but then: why bother? You can either write tons of algorithmic code that obscures the intent or just simply use the appropriate data type. I went for the latter.

Now, I wuz gonna do a dependently-typed pair to represent an occurrence...

... notice how countOccurences is so badly misspelled, by the way? SOMEbody didn't QA-check their problem for the day today, I'm thinking.

... but then I said: 'eh!' I mean: WHY is lookup 'q' $ countOccurences "Hello" ~> Nothing? WHY can't the count of 'q' for a Bag Char representation of "Hello" be 0? 0 is a valid answer and it keeps everything nice and monoidal without having to lift everything unnecessarily into the monadic domain. So, yeah. Let's do that, instead.

So, here we go, and in Idris, because that's how I'm rolling these days. The advantages of dependent types have been enumerated elsewhere, so we'll just go with the assumption that they're better and move on, using them, instead of extolling them, in this post.

Wrong!

So, my first attempt at Bag crashed and burned, because I did this:

data Bag : (x : Type) -> Type where
    add : Bag x -> x -> Bag x
    emptyBag : Bag x

and the compiler was fine with that. Hey, I can declare any type I'd like, so long as the types just stay as types, but as soon as I tried to define these things:

emptyList : List x
emptyList = []

emptyBag = Bag emptyList

add (Bag []) x = Bag [(x, 1)]
add (Bag ((x, y) :: rest)) x = Bag ((x, y + 1) :: rest)
add (Bag ((z, y) :: rest)) x = Bag ((z, y) :: (add rest x))

the compiler looked at me and asked: 'geophf, what in tarnation are you-ah tryin' to do?' And about the only intelligent answer I could muster was: 'Ummmm... idk.'

I had gotten too clever for myself by half, trying to reshape a data type you learn in Comp.Sci. 101 as a purely functional type.

Back to Basics ...
(but not BASIC)

So, let's just declare Bag to be what it is and KISS: 'keep it simple, stupid!' Yes, let's.

data Bag x = Air | Stuffed (x, Nat) (Bag x)

Now, I so totally could've gone with the balanced binary-tree representation instead of the simple and standard linked list, but, you know: 'live and learn!' With this declaration the emptyBag becomes so trivial as to be unnecessary, and then add is simplicity, itself, too, but add is, either way, so that's not saying much.

add : Eq x => Bag x -> x -> Bag x
add Air x = Stuffed (x, 1) Air
add (Stuffed (z, y) rest) x =
    case x == z of
        True  => Stuffed (x, y + 1) rest
        False => Stuffed (z, y) (add rest x)

Now, you see me relying on the case-statement, here. Unhappily. I'd like my dependent types to say, 'unify x with x (reflexive) for the isomorphic case, and don't unify x with z for the other case.' But we're not there yet, or my coding isn't on par with being there yet, so I forced total coverage, bifurcating the result-set into isomorphic and not with a hard case-statement. Ick. I hate explicit case-statements! Where is really, really, really smart pattern-matching when I need it?

But with add, constructing a Bag becomes easy, and then counting elements of that bag is easy, too (again, with another case-statement, sigh!):

count : Eq x => x -> Bag x -> Nat
count _ Air = 0
count x (Stuffed (z, y) rest) =
    case x == z of
        True  => y
        False => count x rest

countOccurences (with one-too-few 'r's in the function name) becomes easy, given the Bag data type:

countOccurences : String -> Bag Char
countOccurences str = co' (unpack str) where
    co' [] = Air
    co' (char :: rest) = add (co' rest) char

YAWN!

But look at this:

depth : Bag x -> Nat
depth Air = 0
depth (Stuffed _ rest) = 1 + depth rest

sample : ?bag
sample = countOccurences "The quick, brown fox jumped over the lazy dog."

bag = proof search

When we do a depth sample, we get the not-surprising answer of 29 : Nat. Perhaps this could be made a tad bit more efficient? Just perhaps. Well, then, let's do that!

data Bag x = Air | Stuffed (x, Nat) (Bag x) (Bag x)

We make Bag balanced, with the add-function doing the work of (very simply) branching off new nodes:

add : Ord x => Bag x -> x -> Bag x
add Air x = Stuffed (x, 1) Air Air
add (Stuffed (z, y) less more) x =
    case (compare x z) of
        LT => Stuffed (z, y) (add less x) more
        GT => Stuffed (z, y) less (add more x)
        EQ => Stuffed (z, y + 1) less more

Then all the other functions change ('morph') to work with a tree, not a list, and work with Ord elements, not with (simply) Eq ones. And so, the redefined depth-function gives a very different result:

depth sample ~> 9 : Nat

Not bad! Not bad! The improved data structure improves efficiency across the board from O(N) to O(log N). Hm, perhaps I'll have count return a dependently-typed pair, just as the library function filter does on List types, but not tonight.

Good night, Moon!
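For readers who want to stay in Haskell, here is a rough, non-dependently-typed sketch of the same idea, using Data.Map from the containers package as the balanced structure. The names mirror the post, but none of this is the author's code:

```haskell
import qualified Data.Map as Map

-- A bag (multiset) as a map from elements to their multiplicities.
newtype Bag a = Bag (Map.Map a Int)

emptyBag :: Bag a
emptyBag = Bag Map.empty

add :: Ord a => Bag a -> a -> Bag a
add (Bag m) x = Bag (Map.insertWith (+) x 1 m)

-- Absent elements simply have count 0: no Maybe, everything stays monoidal.
count :: Ord a => a -> Bag a -> Int
count x (Bag m) = Map.findWithDefault 0 x m

-- Note the two 'r's: this sketch spells the name correctly.
countOccurrences :: String -> Bag Char
countOccurrences = foldl add emptyBag
```

With this, count 'l' (countOccurrences "Hello") is 2 and count 'q' (countOccurrences "Hello") is 0, which is exactly the behaviour the post argues for.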

BuildWrapper/EclipseFP and GHC 7.8

Planet Haskell - 18 hours 15 min ago
I've been working on some issues related to GHC 7.8 in BuildWrapper and EclipseFP. On the EclipseFP side, mainly the quickfixes are affected, because EclipseFP parses the GHC error messages to offer them, and the quote characters have changed in the GHC 7.8 messages.

On the BuildWrapper side, things are more complex. Adapting to API changes wasn't a big deal, but it seems that GHC bugs involving the GHC API, static linking and other unknowns cause some things to break. The solution I've found is to build BuildWrapper with the -dynamic flag. But I couldn't upload this to Hackage because Cabal thinks that -dynamic is a debug flag (it starts with d). I've sent a bug fix to Cabal, so in the next release that'll be fixed. So if you're using GHC 7.8 and BuildWrapper, you may want to rebuild the executable with -dynamic (uncomment the relevant line in the cabal file).

Note: BuildWrapper comes with a comprehensive test suite (90 tests covering all aspects), so you can always build the tests and run them to ensure everything is OK on your system.

Happy Haskell Hacking!
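As a footnote to the -dynamic tip above, the kind of cabal stanza being referred to would look roughly like this; the executable name and surrounding fields are illustrative rather than copied from the actual buildwrapper.cabal:

```
executable buildwrapper
  main-is:        Main.hs
  build-depends:  base
  -- Uncomment the following line to link dynamically against the GHC
  -- libraries, working around the GHC 7.8 static-linking issues above:
  -- ghc-options:    -dynamic
```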

Equational reasoning at scale

Planet Haskell - 18 hours 15 min ago
Haskell programmers care about the correctness of their software and they specify correctness conditions in the form of equations that their code must satisfy. They can then verify the correctness of these equations using equational reasoning to prove that the abstractions they build are sound. To an outsider this might seem like a futile, academic exercise: proving the correctness of small abstractions is difficult, so what hope do we have to prove larger abstractions correct? This post explains how to do precisely that: scale proofs to large and complex abstractions.Purely functional programming uses composition to scale programs, meaning that:We build small components that we can verify correct in isolationWe compose smaller components into larger componentsIf you saw "components" and thought "functions", think again! We can compose things that do not even remotely resemble functions, such as proofs! In fact, Haskell programmers prove large-scale properties exactly the same way we build large-scale programs:We build small proofs that we can verify correct in isolationWe compose smaller proofs into larger proofsThe following sections illustrate in detail how this works in practice, using Monoids as the running example. We will prove the Monoid laws for simple types and work our way up to proving the Monoid laws for much more complex types. Along the way we'll learn how to keep the proof complexity flat as the types grow in size.MonoidsHaskell's Prelude provides the following Monoid type class:class Monoid m where mempty :: m mappend :: m -> m -> m-- An infix operator equivalent to `mappend`(<>) :: Monoid m => m -> m -> mx <> y = mappend x y... and all Monoid instances must obey the following two laws:mempty <> x = x -- Left identityx <> mempty = x -- Right identity(x <> y) <> z = x <> (y <> z) -- AssociativityFor example, Ints form a Monoid:-- See "Appendix A" for some caveatsinstance Monoid Int where mempty = 0 mappend = (+)... and the Monoid laws for Ints are just the laws of addition:0 + x = xx + 0 = x(x + y) + z = x + (y + z)Now we can use (<>) and mempty instead of (+) and 0:>>> 4 <> 26>>> 5 <> mempty <> 510This appears useless at first glance. We already have (+) and 0, so why are we using the Monoid operations?Extending MonoidsWell, what if I want to combine things other than Ints, like pairs of Ints. I want to be able to write code like this:>>> (1, 2) <> (3, 4)(4, 6)Well, that seems mildly interesting. Let's try to define a Monoid instance for pairs of Ints:instance Monoid (Int, Int) where mempty = (0, 0) mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)Now my wish is true and I can "add" binary tuples together using (<>) and mempty:>>> (1, 2) <> (3, 4)(4, 6)>>> (1, 2) <> (3, mempty) <> (mempty, 4)(4, 6)>>> (1, 2) <> mempty <> (3, 4)(4, 6)However, I still haven't proven that this new Monoid instance obeys the Monoid laws. Fortunately, this is a very simple proof.I'll begin with the first Monoid law, which requires that:mempty <> x = xWe will begin from the left-hand side of the equation and try to arrive at the right-hand side by substituting equals-for-equals (a.k.a. 
"equational reasoning"):-- Left-hand side of the equationmempty <> x-- x <> y = mappend x y= mappend mempty x-- `mempty = (0, 0)`= mappend (0, 0) x-- Define: x = (xL, xR), since `x` is a tuple= mappend (0, 0) (xL, xR)-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)= (0 + xL, 0 + xR)-- 0 + x = x= (xL, xR)-- x = (xL, xR)= xThe proof for the second Monoid law is symmetric-- Left-hand side of the equation= x <> mempty-- x <> y = mappend x y= mappend x mempty-- mempty = (0, 0)= mappend x (0, 0)-- Define: x = (xL, xR), since `x` is a tuple= mappend (xL, xR) (0, 0)-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)= (xL + 0, xR + 0)-- x + 0 = x= (xL, xR)-- x = (xL, xR)= xThe third Monoid law requires that (<>) is associative:(x <> y) <> z = x <> (y <> z)Again I'll begin from the left side of the equation:-- Left-hand side(x <> y) <> z-- x <> y = mappend x y= mappend (mappend x y) z-- x = (xL, xR)-- y = (yL, yR)-- z = (zL, zR)= mappend (mappend (xL, xR) (yL, yR)) (zL, zR)-- mappend (x1, y1) (x2 , y2) = (x1 + x2, y1 + y2)= mappend (xL + yL, xR + yR) (zL, zR)-- mappend (x1, y1) (x2 , y2) = (x1 + x2, y1 + y2)= mappend ((xL + yL) + zL, (xR + yR) + zR)-- (x + y) + z = x + (y + z)= mappend (xL + (yL + zL), xR + (yR + zR))-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)= mappend (xL, xR) (yL + zL, yR + zR)-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)= mappend (xL, xR) (mappend (yL, yR) (zL, zR))-- x = (xL, xR)-- y = (yL, yR)-- z = (zL, zR)= mappend x (mappend y z)-- x <> y = mappend x y= x <> (y <> z)That completes the proof of the three Monoid laws, but I'm not satisfied with these proofs.Generalizing proofsI don't like the above proofs because they are disposable, meaning that I cannot reuse them to prove other properties of interest. I'm a programmer, so I loathe busy work and unnecessary repetition, both for code and proofs. I would like to find a way to generalize the above proofs so that I can use them in more places.We improve proof reuse in the same way that we improve code reuse. To see why, consider the following sort function:sort :: [Int] -> [Int]This sort function is disposable because it only works on Ints. For example, I cannot use the above function to sort a list of Doubles.Fortunately, programming languages with generics let us generalize sort by parametrizing sort on the element type of the list:sort :: Ord a => [a] -> [a]That type says that we can call sort on any list of as, so long as the type a implements the Ord type class (a comparison interface). This works because sort doesn't really care whether or not the elements are Ints; sort only cares if they are comparable.Similarly, we can make the proof more "generic". If we inspect the proof closely, we will notice that we don't really care whether or not the tuple contains Ints. The only Int-specific properties we use in our proof are:0 + x = xx + 0 = x(x + y) + z = x + (y + z)However, these properties hold true for all Monoids, not just Ints. Therefore, we can generalize our Monoid instance for tuples by parametrizing it on the type of each field of the tuple:instance (Monoid a, Monoid b) => Monoid (a, b) where mempty = (mempty, mempty) mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)The above Monoid instance says that we can combine tuples so long as we can combine their individual fields. Our original Monoid instance was just a special case of this instance where both the a and b types are Ints.Note: The mempty and mappend on the left-hand side of each equation are for tuples. 
The memptys and mappends on the right-hand side of each equation are for the types a and b. Haskell overloads type class methods like mempty and mappend to work on any type that implements the Monoid type class, and the compiler distinguishes them by their inferred types.We can similarly generalize our original proofs, too, by just replacing the Int-specific parts with their more general Monoid counterparts.Here is the generalized proof of the left identity law:-- Left-hand side of the equationmempty <> x-- x <> y = mappend x y= mappend mempty x-- `mempty = (mempty, mempty)`= mappend (mempty, mempty) x-- Define: x = (xL, xR), since `x` is a tuple= mappend (mempty, mempty) (xL, xR)-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)= (mappend mempty xL, mappend mempty xR)-- Monoid law: mappend mempty x = x= (xL, xR)-- x = (xL, xR)= x... the right identity law:-- Left-hand side of the equation= x <> mempty-- x <> y = mappend x y= mappend x mempty-- mempty = (mempty, mempty)= mappend x (mempty, mempty)-- Define: x = (xL, xR), since `x` is a tuple= mappend (xL, xR) (mempty, mempty)-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)= (mappend xL mempty, mappend xR mempty)-- Monoid law: mappend x mempty = x= (xL, xR)-- x = (xL, xR)= x... and the associativity law:-- Left-hand side(x <> y) <> z-- x <> y = mappend x y= mappend (mappend x y) z-- x = (xL, xR)-- y = (yL, yR)-- z = (zL, zR)= mappend (mappend (xL, xR) (yL, yR)) (zL, zR)-- mappend (x1, y1) (x2 , y2) = (mappend x1 x2, mappend y1 y2)= mappend (mappend xL yL, mappend xR yR) (zL, zR)-- mappend (x1, y1) (x2 , y2) = (mappend x1 x2, mappend y1 y2)= (mappend (mappend xL yL) zL, mappend (mappend xR yR) zR)-- Monoid law: mappend (mappend x y) z = mappend x (mappend y z)= (mappend xL (mappend yL zL), mappend xR (mappend yR zR))-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)= mappend (xL, xR) (mappend yL zL, mappend yR zR)-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)= mappend (xL, xR) (mappend (yL, yR) (zL, zR))-- x = (xL, xR)-- y = (yL, yR)-- z = (zL, zR)= mappend x (mappend y z)-- x <> y = mappend x y= x <> (y <> z)This more general Monoid instance lets us stick any Monoids inside the tuple fields and we can still combine the tuples. For example, lists form a Monoid:-- Exercise: Prove the monoid laws for listsinstance Monoid [a] where mempty = [] mappend = (++)... so we can stick lists inside the right field of each tuple and still combine them:>>> (1, [2, 3]) <> (4, [5, 6])(5, [2, 3, 5, 6])>>> (1, [2, 3]) <> (4, mempty) <> (mempty, [5, 6])(5, [2, 3, 5, 6])>>> (1, [2, 3]) <> mempty <> (4, [5, 6])(5, [2, 3, 5, 6])Why, we can even stick yet another tuple inside the right field and still combine them:>>> (1, (2, 3)) <> (4, (5, 6))(5, (7, 9))We can try even more exotic permutations and everything still "just works":>>> ((1,[2, 3]), ([4, 5], 6)) <> ((7, [8, 9]), ([10, 11), 12)((8, [2, 3, 8, 9]), ([4, 5, 10, 11], 18))This is our first example of a "scalable proof". We began from three primitive building blocks:Int is a Monoid[a] is a Monoid(a, b) is a Monoid if a is a Monoid and b is a Monoid... and we connected those three building blocks to assemble a variety of new Monoid instances. No matter how many tuples we nest the result is still a Monoid and obeys the Monoid laws. We don't need to re-prove the Monoid laws every time we assemble a new permutation of these building blocks.However, these building blocks are still pretty limited. 
What other useful things can we combine to build new Monoids?IOWe're so used to thinking of Monoids as data, so let's define a new Monoid instance for something entirely un-data-like:-- See "Appendix A" for some caveatsinstance Monoid b => Monoid (IO b) where mempty = return mempty mappend io1 io2 = do a1 <- io1 a2 <- io2 return (mappend a1 a2)The above instance says: "If a is a Monoid, then an IO action that returns an a is also a Monoid". Let's test this using the getLine function from the Prelude:-- Read one line of input from stdingetLine :: IO StringString is a Monoid, since a String is just a list of characters, so we should be able to mappend multiple getLine statements together. Let's see what happens:>>> getLine -- Reads one line of inputHello"Hello">>> getLine <> getLineABCDEF"ABCDEF">>> getLine <> getLine <> getLine123456"123456"Neat! When we combine multiple commands we combine their effects and their results.Of course, we don't have to limit ourselves to reading strings. We can use readLn from the Prelude to read in anything that implements the Read type class:-- Parse a `Read`able value from one line of stdinreadLn :: Read a => IO aAll we have to do is tell the compiler which type a we intend to Read by providing a type signature:>>> readLn :: IO (Int, Int)(1, 2)(1 ,2)>>> readLn <> readLn :: IO (Int, Int)(1,2)(3,4)(4,6)>>> readLn <> readLn <> readLn :: IO (Int, Int)(1,2)(3,4)(5,6)(9,12)This works because:Int is a MonoidTherefore, (Int, Int) is a MonoidTherefore, IO (Int, Int) is a MonoidOr let's flip things around and nest IO actions inside of a tuple:>>> let ios = (getLine, readLn) :: (IO String, IO (Int, Int))>>> let (getLines, readLns) = ios <> ios <> ios>>> getLines123456123456>>> readLns(1,2)(3,4)(5,6)(9,12)We can very easily reason that the type (IO String, IO (Int, Int)) obeys the Monoid laws because:String is a MonoidIf String is a Monoid then IO String is also a MonoidInt is a MonoidIf Int is a Monoid, then (Int, Int) is also a `MonoidIf (Int, Int) is a Monoid, then IO (Int, Int) is also a MonoidIf IO String is a Monoid and IO (Int, Int) is a Monoid, then (IO String, IO (Int, Int)) is also a MonoidHowever, we don't really have to reason about this at all. The compiler will automatically assemble the correct Monoid instance for us. The only thing we need to verify is that the primitive Monoid instances obey the Monoid laws, and then we can trust that any larger Monoid instance the compiler derives will also obey the Monoid laws.The Unit MonoidHaskell Prelude also provides the putStrLn function, which echoes a String to standard output with a newline:putStrLn :: String -> IO ()Is putStrLn combinable? There's only one way to find out!>>> putStrLn "Hello" <> putStrLn "World"HelloWorldInteresting, but why does that work? Well, let's look at the types of the commands we are combining:putStrLn "Hello" :: IO ()putStrLn "World" :: IO ()Well, we said that IO b is a Monoid if b is a Monoid, and b in this case is () (pronounced "unit"), which you can think of as an "empty tuple". Therefore, () must form a Monoid of some sort, and if we dig into Data.Monoid, we will discover the following Monoid instance:-- Exercise: Prove the monoid laws for `()`instance Monoid () where mempty = () mappend () () = ()This says that empty tuples form a trivial Monoid, since there's only one possible value (ignoring bottom) for an empty tuple: (). 
Therefore, we can derive that IO () is a Monoid because () is a Monoid.FunctionsAlright, so we can combine putStrLn "Hello" with putStrLn "World", but can we combine naked putStrLn functions?>>> (putStrLn <> putStrLn) "Hello"HelloHelloWoah, how does that work?We never wrote a Monoid instance for the type String -> IO (), yet somehow the compiler magically accepted the above code and produced a sensible result.This works because of the following Monoid instance for functions:instance Monoid b => Monoid (a -> b) where mempty = \_ -> mempty mappend f g = \a -> mappend (f a) (g a)This says: "If b is a Monoid, then any function that returns a b is also a Monoid".The compiler then deduced that:() is a MonoidIf () is a Monoid, then IO () is also a MonoidIf IO () is a Monoid then String -> IO () is also a MonoidThe compiler is a trusted friend, deducing Monoid instances we never knew existed.Monoid pluginsNow we have enough building blocks to assemble a non-trivial example. Let's build a key logger with a Monoid-based plugin system.The central scaffold of our program is a simple main loop that echoes characters from standard input to standard output:main = do hSetEcho stdin False forever $ do c <- getChar putChar cHowever, we would like to intercept key strokes for nefarious purposes, so we will slightly modify this program to install a handler at the beginning of the program that we will invoke on every incoming character:install :: IO (Char -> IO ())install = ???main = do hSetEcho stdin False handleChar <- install forever $ do c <- getChar handleChar c putChar cNotice that the type of install is exactly the correct type to be a Monoid:() is a MonoidTherefore, IO () is also a MonoidTherefore Char -> IO () is also a MonoidTherefore IO (Char -> IO ()) is also a MonoidTherefore, we can combine key logging plugins together using Monoid operations. Here is one such example:type Plugin = IO (Char -> IO ())logTo :: FilePath -> PluginlogTo filePath = do handle <- openFile filePath WriteMode return (hPutChar handle)main = do hSetEcho stdin False handleChar <- logTo "file1.txt" <> logTo "file2.txt" forever $ do c <- getChar handleChar c putChar cNow, every key stroke will be recorded to both file1.txt and file2.txt. Let's confirm that this works as expected:$ ./loggerTestABC42$ cat file1.txtTestABC42$ cat file2.txtTestABC42Try writing your own Plugins and mixing them in with (<>) to see what happens. "Appendix C" contains the complete code for this section so you can experiment with your own Plugins.ApplicativesNotice that I never actually proved the Monoid laws for the following two Monoid instances:instance Monoid b => Monoid (a -> b) where mempty = \_ -> mempty mappend f g = \a -> mappend (f a) (g a)instance Monoid a => Monoid (IO a) where mempty = return mempty mappend io1 io2 = do a1 <- io1 a2 <- io2 return (mappend a1 a2)The reason why is that they are both special cases of a more general pattern. 
We can detect the pattern if we rewrite both of them to use the pure and liftA2 functions from Control.Applicative:

import Control.Applicative (pure, liftA2)

instance Monoid b => Monoid (a -> b) where
    mempty  = pure mempty
    mappend = liftA2 mappend

instance Monoid b => Monoid (IO b) where
    mempty  = pure mempty
    mappend = liftA2 mappend

This works because both IO and functions implement the following Applicative interface:

class Functor f => Applicative f where
    pure  :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b

-- Lift a binary function over the functor `f`
liftA2 :: Applicative f => (a -> b -> c) -> f a -> f b -> f c
liftA2 f x y = (pure f <*> x) <*> y

... and all Applicative instances must obey several Applicative laws:

pure id <*> v = v

((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)

pure f <*> pure x = pure (f x)

u <*> pure y = pure (\f -> f y) <*> u

These laws may seem a bit ad hoc, but this paper explains that you can reorganize the Applicative class to this equivalent type class:

class Functor f => Monoidal f where
    unit :: f ()
    (#)  :: f a -> f b -> f (a, b)

Then the corresponding laws become much more symmetric:

fmap snd (unit # x) = x                      -- Left identity

fmap fst (x # unit) = x                      -- Right identity

fmap assoc ((x # y) # z) = x # (y # z)       -- Associativity
  where assoc ((a, b), c) = (a, (b, c))

fmap (f *** g) (x # y) = fmap f x # fmap g y -- Naturality
  where (f *** g) (a, b) = (f a, g b)

I personally prefer the Monoidal formulation, but you go to war with the army you have, so we will use the Applicative type class for this post.

All Applicatives possess a very powerful property: they can all automatically lift Monoid operations using the following instance:

instance (Applicative f, Monoid b) => Monoid (f b) where
    mempty  = pure mempty
    mappend = liftA2 mappend

This says: "If f is an Applicative and b is a Monoid, then f b is also a Monoid." In other words, we can automatically extend any existing Monoid with some new feature f and get back a new Monoid.

Note: The above instance is bad Haskell because it overlaps with other type class instances. In practice we have to duplicate the above code once for each Applicative. Also, for some Applicatives we may want a different Monoid instance.

We can prove that the above instance obeys the Monoid laws without knowing anything about f and b, other than the fact that f obeys the Applicative laws and b obeys the Monoid laws. These proofs are a little long, so I've included them in Appendix B.

Both IO and functions implement the Applicative type class:

instance Applicative IO where
    pure = return

    iof <*> iox = do
        f <- iof
        x <- iox
        return (f x)

instance Applicative ((->) a) where
    pure x = \_ -> x

    kf <*> kx = \a ->
        let f = kf a
            x = kx a
        in  f x

This means that we can kill two birds with one stone. Every time we prove the Applicative laws for some functor F:

instance Applicative F where ...

...
we automatically prove that the following Monoid instance is correct for free:instance Monoid b => Monoid (F b) where mempty = pure mempty mappend = liftA2 mappendIn the interest of brevity, I will skip the proofs of the Applicative laws, but I may cover them in a subsequent post.The beauty of Applicative Functors is that every new Applicative instance we discover adds a new building block to our Monoid toolbox, and Haskell programmers have already discovered lots of Applicative Functors.Revisiting tuplesOne of the very first Monoid instances we wrote was:instance (Monoid a, Monoid b) => Monoid (a, b) where mempty = (mempty, mempty) mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)Check this out:instance (Monoid a, Monoid b) => Monoid (a, b) where mempty = pure mempty mappend = liftA2 mappendThis Monoid instance is yet another special case of the Applicative pattern we just covered!This works because of the following Applicative instance in Control.Applicative:instance Monoid a => Applicative ((,) a) where pure b = (mempty, b) (a1, f) <*> (a2, x) = (mappend a1 a2, f x)This instance obeys the Applicative laws (proof omitted), so our Monoid instance for tuples is automatically correct, too.Composing applicativesIn the very first section I wrote:Haskell programmers prove large-scale properties exactly the same way we build large-scale programs:We build small proofs that we can verify correct in isolationWe compose smaller proofs into larger proofsI don't like to use the word compose lightly. In the context of category theory, compose has a very rigorous meaning, indicating composition of morphisms in some category. This final section will show that we can actually compose Monoid proofs in a very rigorous sense of the word.We can define a category of Monoid proofs:Objects are types and their associated Monoid proofsMorphisms are Applicative FunctorsThe identity morphism is the Identity applicativeThe composition operation is composition of Applicative FunctorsThe category laws are isomorphisms instead of equalitiesSo in our Plugin example, we began on the proof that () was a Monoid and then composed three Applicative morphisms to prove that Plugin was a Monoid. I will use the following diagram to illustrate this:+-----------------------+| || Legend: * = Object || || v || | = Morphism || v || |+-----------------------+* `()` is a `Monoid`v| IOv* `IO ()` is a `Monoid`v| ((->) String)v* `String -> IO ()` is a `Monoid`v| IOv* `IO (String -> IO ())` (i.e. `Plugin`) is a `Monoid`Therefore, we were literally composing proofs together.ConclusionYou can equationally reason at scale by decomposing larger proofs into smaller reusable proofs, the same way we decompose programs into smaller and more reusable components. There is no limit to how many proofs you can compose together, and therefore there is no limit to how complex of a program you can tame using equational reasoning.This post only gave one example of composing proofs within Haskell. The more you learn the language, the more examples of composable proofs you will encounter. Another common example is automatically deriving Monad proofs by composing monad transformers.As you learn Haskell, you will discover that the hard part is not proving things. Rather, the challenge is learning how to decompose proofs into smaller proofs and you can cultivate this skill by studying category theory and abstract algebra. 
These mathematical disciplines teach you how to extract common and reusable proofs and patterns from what appears to be disposable and idiosyncratic code.Appendix A - Missing Monoid instancesThese Monoid instance from this post do not actually appear in the Haskell standard library:instance Monoid b => Monoid (IO b)instance Monoid IntThe first instance was recently proposed here on the Glasgow Haskell Users mailing list. However, in the short term you can work around it by writing your own Monoid instances by hand just by inserting a sufficient number of pures and liftA2s.For example, suppose we wanted to provide a Monoid instance for Plugin. We would just newtype Plugin and write:newtype Plugin = Plugin { install :: IO (String -> IO ()) }instance Monoid Plugin where mempty = Plugin (pure (pure (pure mempty))) mappend (Plugin p1) (Plugin p2) = Plugin (liftA2 (liftA2 (liftA2 mappend)) p1 p2)This is what the compiler would have derived by hand.Alternatively, you could define an orphan Monoid instance for IO, but this is generally frowned upon.There is no default Monoid instance for Int because there are actually two possible instances to choose from:-- Alternative #1instance Monoid Int where mempty = 0 mappend = (+)-- Alternative #2instance Monoid Int where mempty = 1 mappend = (*)So instead, Data.Monoid sidesteps the issue by providing two newtypes to distinguish which instance we prefer:newtype Sum a = Sum { getSum :: a }instance Num a => Monoid (Sum a)newtype Product a = Product { getProduct :: a}instance Num a => Monoid (Product a)An even better solution is to use a semiring, which allows two Monoid instances to coexist for the same type. You can think of Haskell's Num class as an approximation of the semiring class:class Num a where fromInteger :: Integer -> a (+) :: a -> a -> a (*) :: a -> a -> a -- ... and other operations unrelated to semiringsNote that we can also lift the Num class over the Applicative class, exactly the same way we lifted the Monoid class. Here's the code:instance (Applicative f, Num a) => Num (f a) where fromInteger n = pure (fromInteger n) (+) = liftA2 (+) (*) = liftA2 (*) (-) = liftA2 (-) negate = fmap negate abs = fmap abs signum = fmap signumThis lifting guarantees that if a obeys the semiring laws then so will f a. Of course, you will have to specialize the above instance to every concrete Applicative because otherwise you will get overlapping instances.Appendix BThese are the proofs to establish that the following Monoid instance obeys the Monoid laws:instance (Applicative f, Monoid b) => Monoid (f b) where mempty = pure mempty mappend = liftA2 mappend... 
meaning that if f obeys the Applicative laws and b obeys the Monoid laws, then f b also obeys the Monoid laws.Proof of the left identity law:mempty <> x-- x <> y = mappend x y= mappend mempty x-- mappend = liftA2 mappend= liftA2 mappend mempty x-- mempty = pure mempty= liftA2 mappend (pure mempty) x-- liftA2 f x y = (pure f <*> x) <*> y= (pure mappend <*> pure mempty) <*> x-- Applicative law: pure f <*> pure x = pure (f x)= pure (mappend mempty) <*> x-- Eta conversion= pure (\a -> mappend mempty a) <*> x-- mappend mempty x = x= pure (\a -> a) <*> x-- id = \x -> x= pure id <*> x-- Applicative law: pure id <*> v = v= xProof of the right identity law:x <> mempty = x-- x <> y = mappend x y= mappend x mempty-- mappend = liftA2 mappend= liftA2 mappend x mempty-- mempty = pure mempty= liftA2 mappend x (pure mempty)-- liftA2 f x y = (pure f <*> x) <*> y= (pure mappend <*> x) <*> pure mempty-- Applicative law: u <*> pure y = pure (\f -> f y) <*> u= pure (\f -> f mempty) <*> (pure mappend <*> x)-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)= ((pure (.) <*> pure (\f -> f mempty)) <*> pure mappend) <*> x-- Applicative law: pure f <*> pure x = pure (f x)= (pure ((.) (\f -> f mempty)) <*> pure mappend) <*> x-- Applicative law : pure f <*> pure x = pure (f x)= pure ((.) (\f -> f mempty) mappend) <*> x-- `(.) f g` is just prefix notation for `f . g`= pure ((\f -> f mempty) . mappend) <*> x-- f . g = \x -> f (g x)= pure (\x -> (\f -> f mempty) (mappend x)) <*> x-- Apply the lambda= pure (\x -> mappend x mempty) <*> x-- Monoid law: mappend x mempty = x= pure (\x -> x) <*> x-- id = \x -> x= pure id <*> x-- Applicative law: pure id <*> v = v= xProof of the associativity law:(x <> y) <> z-- x <> y = mappend x y= mappend (mappend x y) z-- mappend = liftA2 mappend= liftA2 mappend (liftA2 mappend x y) z-- liftA2 f x y = (pure f <*> x) <*> y= (pure mappend <*> ((pure mappend <*> x) <*> y)) <*> z-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)= (((pure (.) <*> pure mappend) <*> (pure mappend <*> x)) <*> y) <*> z-- Applicative law: pure f <*> pure x = pure (f x)= ((pure f <*> (pure mappend <*> x)) <*> y) <*> z where f = (.) mappend-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)= ((((pure (.) <*> pure f) <*> pure mappend) <*> x) <*> y) <*> z where f = (.) mappend-- Applicative law: pure f <*> pure x = pure (f x)= (((pure f <*> pure mappend) <*> x) <*> y) <*> z where f = (.) ((.) mappend)-- Applicative law: pure f <*> pure x = pure (f x)= ((pure f <*> x) <*> y) <*> z where f = (.) ((.) mappend) mappend-- (.) f g = f . g= ((pure f <*> x) <*> y) <*> z where f = ((.) mappend) . mappend-- Eta conversion= ((pure f <*> x) <*> y) <*> z where f x = (((.) mappend) . mappend) x-- (f . g) x = f (g x)= ((pure f <*> x) <*> y) <*> z where f x = (.) mappend (mappend x)-- (.) f g = f . g= ((pure f <*> x) <*> y) <*> z where f x = mappend . (mappend x)-- Eta conversion= ((pure f <*> x) <*> y) <*> z where f x y = (mappend . (mappend x)) y-- (f . g) x = f (g x)= ((pure f <*> x) <*> y) <*> z where f x y = mappend (mappend x y)-- Eta conversion= ((pure f <*> x) <*> y) <*> z where f x y z = mappend (mappend x y) z-- Monoid law: mappend (mappend x y) z = mappend x (mappend y z)= ((pure f <*> x) <*> y) <*> z where f x y z = mappend x (mappend y z)-- (f . g) x = f (g x)= ((pure f <*> x) <*> y) <*> z where f x y z = (mappend x . mappend y) z-- Eta conversion= ((pure f <*> x) <*> y) <*> z where f x y = mappend x . mappend y-- (.) f g = f . 
g= ((pure f <*> x) <*> y) <*> z where f x y = (.) (mappend x) (mappend y)-- (f . g) x = f= ((pure f <*> x) <*> y) <*> z where f x y = (((.) . mappend) x) (mappend y)-- (f . g) x = f (g x)= ((pure f <*> x) <*> y) <*> z where f x y = ((((.) . mappend) x) . mappend) y-- Eta conversion= ((pure f <*> x) <*> y) <*> z where f x = (((.) . mappend) x) . mappend-- (.) f g = f . g= ((pure f <*> x) <*> y) <*> z where f x = (.) (((.) . mappend) x) mappend-- Lambda abstraction= ((pure f <*> x) <*> y) <*> z where f x = (\k -> k mappend) ((.) (((.) . mappend) x))-- (f . g) x = f (g x)= ((pure f <*> x) <*> y) <*> z where f x = (\k -> k mappend) (((.) . ((.) . mappend)) x)-- Eta conversion= ((pure f <*> x) <*> y) <*> z where f = (\k -> k mappend) . ((.) . ((.) . mappend))-- (.) f g = f . g= ((pure f <*> x) <*> y) <*> z where f = (.) (\k -> k mappend) ((.) . ((.) . mappend))-- Applicative law: pure f <*> pure x = pure (f x)= (((pure g <*> pure f) <*> x) <*> y) <*> z where g = (.) (\k -> k mappend) f = (.) . ((.) . mappend)-- Applicative law: pure f <*> pure x = pure (f x)= ((((pure (.) <*> pure (\k -> k mappend)) <*> pure f) <*> x) <*> y) <*> z where f = (.) . ((.) . mappend)-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)= ((pure (\k -> k mappend) <*> (pure f <*> x)) <*> y) <*> z where f = (.) . ((.) . mappend)-- u <*> pure y = pure (\k -> k y) <*> u= (((pure f <*> x) <*> pure mappend) <*> y) <*> z where f = (.) . ((.) . mappend)-- (.) f g = f . g= (((pure f <*> x) <*> pure mappend) <*> y) <*> z where f = (.) (.) ((.) . mappend)-- Applicative law: pure f <*> pure x = pure (f x)= ((((pure g <*> pure f) <*> x) <*> pure mappend) <*> y) <*> z where g = (.) (.) f = (.) . mappend-- Applicative law: pure f <*> pure x = pure (f x)= (((((pure (.) <*> pure (.)) <*> pure f) <*> x) <*> pure mappend) <*> y) <*> z where f = (.) . mappend-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)= (((pure (.) <*> (pure f <*> x)) <*> pure mappend) <*> y) <*> z where f = (.) . mappend-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)= ((pure f <*> x) <*> (pure mappend <*> y)) <*> z where f = (.) . mappend-- (.) f g = f . g= ((pure f <*> x) <*> (pure mappend <*> y)) <*> z where f = (.) (.) mappend-- Applicative law: pure f <*> pure x = pure (f x)= (((pure f <*> pure mappend) <*> x) <*> (pure mappend <*> y)) <*> z where f = (.) (.)-- Applicative law: pure f <*> pure x = pure (f x)= ((((pure (.) <*> pure (.)) <*> pure mappend) <*> x) <*> (pure mappend <*> y)) <*> z-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)= ((pure (.) <*> (pure mappend <*> x)) <*> (pure mappend <*> y)) <*> z-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)= (pure mappend <*> x) <*> ((pure mappend <*> y) <*> z)-- liftA2 f x y = (pure f <*> x) <*> y= liftA2 mappend x (liftA2 mappend y z)-- mappend = liftA2 mappend= mappend x (mappend y z)-- x <> y = mappend x y= x <> (y <> z)Appendix C: Monoid key loggingHere is the complete program for a key logger with a Monoid-based plugin system:import Control.Applicative (pure, liftA2)import Control.Monad (forever)import Data.Monoidimport System.IOinstance Monoid b => Monoid (IO b) where mempty = pure mempty mappend = liftA2 mappendtype Plugin = IO (Char -> IO ())logTo :: FilePath -> PluginlogTo filePath = do handle <- openFile filePath WriteMode return (hPutChar handle)main = do hSetEcho stdin False handleChar <- logTo "file1.txt" <> logTo "file2.txt" forever $ do c <- getChar handleChar c putChar c
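As a small addendum to Appendix A, here is how the Sum and Product newtypes from Data.Monoid look in use; this is a brief illustrative snippet, not part of the original post:

```haskell
import Data.Monoid (Sum(..), Product(..), (<>))

-- Wrapping Int in a newtype selects which Monoid instance we mean.
additive :: Int
additive = getSum (Sum 3 <> Sum 4)                    -- 7

multiplicative :: Int
multiplicative = getProduct (Product 3 <> Product 4)  -- 12
```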

Similarity analysis of quilt blocks

Planet Haskell - 18 hours 15 min ago
As I've discussed elsewhere, I once wrote a program to enumerate all the possible quilt blocks of a certain type. The quilt blocks in question are, in quilt jargon, sixteen-patch half-square triangles. A half-square triangle, also called a “patch”, is two triangles of fabric sewn together, like this: Then you sew four of these patches into a four-patch, say like this: Then to make a sixteen-patch block of the type I was considering, you take four identical four-patch blocks, and sew them together with rotational symmetry, like this: It turns out that there are exactly 72 different ways to do this. (Blocks equivalent under a reflection are considered the same, as are blocks obtained by exchanging the roles of black and white, which are merely stand-ins for arbitrary colors to be chosen later.) Here is the complete set of 72: It's immediately clear that some of these resemble one another, sometimes so strongly that it can be hard to tell how they differ, while others are very distinctive and unique-seeming. I wanted to make the computer classify the blocks on the basis of similarity. My idea was to try to find a way to get the computer to notice which blocks have distinctive components of one color. For example, many blocks have a distinctive diamond shape in the center. Some have a pinwheel like this: which also has the diamond in the middle, while others have a different kind of pinwheel with no diamond: I wanted to enumerate such components and ask the computer to list which blocks contained which shapes; then group them by similarity, the idea being that that blocks with the same distinctive components are similar. The program suite uses a compact notation of blocks and of shapes that makes it easy to figure out which blocks contain which distinctive components. Since each block is made of four identical four-patches, it's enough just to examine the four-patches. Each of the half-square triangle patches can be oriented in two ways:    Here are two of the 12 ways to orient the patches in a four-patch:    Each 16-patch is made of four four-patches, and you must imagine that the four-patches shown above are in the upper-left position in the 16-patch. Then symmetry of the 16-patch block means that triangles with the same label are in positions that are symmetric with respect to the entire block. For example, the two triangles labeled b are on opposite sides of the block's northwest-southeast diagonal. But there is no symmetry of the full 16-patch block that carries triangle d to triangle g, because d is on the edge of the block, while g is in the interior. Triangles must be colored opposite colors if they are part of the same patch, but other than that there are no constraints on the coloring. A block might, of course, have patches in both orientations: All the blocks with diagonals oriented this way are assigned descriptors made from the letters bbdefgii. Once you have chosen one of the 12 ways to orient the diagonals in the four-patch, you still have to color the patches. A descriptor like bbeeffii describes the orientation of the diagonal lines in the squares, but it does not describe the way the four patches are colored; there are between 4 and 8 ways to color each sort of four-patch. For example, the bbeeffii four-patch shown earlier can be colored in six different ways:                In each case, all four diagonals run from northwest to southeast. 
(All other ways of coloring this four-patch are equivalent to one of these under one or more of rotation, reflection, and exchange of black and white.) We can describe a patch by listing the descriptors of the eight triangles, grouped by which triangles form connected regions. For example, the first block above is:    b/bf/ee/fi/i because there's an isolated white b triangle, then a black parallelogram made of a b and an f patch, then a white triangle made from the two white e triangles then another parallelogram made from the black f and i, and finally in the middle, the white i. (The two white e triangles appear to be separated, but when four of these four-patches are joined into a 16-patch block, the two white e patches will be adjacent and will form a single large triangle: ) The other five bbeeffii four-patches are, in the same order they are shown above: b/b/e/e/f/f/i/i b/b/e/e/fi/fi b/bfi/ee/f/i bfi/bfi/e/e bf/bf/e/e/i/i All six have bbeeffii, but grouped differently depending on the colorings. The second one ( b/b/e/e/f/f/i/i) has no regions with more than one triangle; the fifth ( bfi/bfi/e/e) has two large regions of three triangles each, and two isolated triangles. In the latter four-patch, the bfi in the descriptor has three letters because the patch has a corresponding distinctive component made of three triangles. I made up a list of the descriptors for all 72 blocks; I think I did this by hand. (The work directory contains a blocks file that maps blocks to their descriptors, but the Makefile does not say how to build it, suggesting that it was not automatically built.) From this list one can automatically extract a list of descriptors of interesting shapes: an interesting shape is two or more letters that appear together in some descriptor. (Or it can be the single letter j, which is exceptional; see below.) For example, bffh represents a distinctive component. It can only occur in a patch that has a b, two fs, and an h, like this one: and it will only be significant if the b, the two fs, and the h are the same color: in which case you get this distinctive and interesting-looking hook component. There is only one block that includes this distinctive hook component; it has descriptor b/bffh/ee/j, and looks like this: . But some of the distinctive components are more common. The ee component represents the large white half-diamonds on the four sides. A block with "ee" in its descriptor always looks like this: and the blocks formed from such patches always have a distinctive half-diamond component on each edge, like this: (The stippled areas vary from block to block, but the blocks with ee in their descriptors always have the half-diamonds as shown.) The blocks listed at http://hop.perl.plover.com/quilt/analysis/images/ee.html all have the ee component. There are many differences between them, but they all have the half-diamonds in common. Other distinctive components have similar short descriptors. The two pinwheels I mentioned above are gh and fi, respectively; if you look at the list of gh blocks and the list of fi blocks you'll see all the blocks with each kind of pinwheel. Descriptor j is an exception. It makes an interesting shape all by itself, because any block whose patches have j in their descriptor will have a distinctive-looking diamond component in the center. The four-patch looks like this: so the full sixteen-patch looks like this: where the stippled parts can vary. 
A look at the list of blocks with component j will confirm that they all have this basic similarity. I had made a list of the descriptors for each of the 72 blocks, and from this I extracted a list of the descriptors for interesting component shapes. Then it was only a matter of finding the component descriptors in the block descriptors to know which blocks contained which components; if two blocks share two different distinctive components, they probably look somewhat similar. Then I sorted the blocks into groups, where two blocks were in the same group if they shared two distinctive components. The resulting grouping lists, for each block, which other blocks have at least two shapes in common with it. Such blocks do indeed tend to look quite similar. This strategy was actually the second thing I tried; the first thing didn't work out well. (I forget just what it was, but I think it involved finding polygons in each block that had white inside and black outside, or vice versa.) I was satisfied enough with this second attempt that I considered the project a success and stopped work on it. The complete final results were:

- This tabulation of blocks that are somewhat similar
- This tabulation of blocks that are distinctly similar (This is the final product; I consider this a sufficiently definitive listing of “similar blocks”.)
- This tabulation of blocks that are extremely similar
- And these tabulations of all the blocks with various distinctive components: bd bf bfh bfi cd cdd cdf cf cfi ee eg egh egi fgh fh fi gg ggh ggi gh gi j

It may also be interesting to browse the work directory.
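The grouping step described above lends itself to a compact sketch. The following Haskell is purely illustrative (the original tooling was not written this way and its exact matching rules may differ): it assumes a block descriptor is a string like "b/bffh/ee/j", treats each slash-separated group of two or more letters (plus the exceptional "j") as a distinctive component, and calls two blocks similar when they share at least two components.

```haskell
import Data.List (nub)

type Descriptor = String

-- Slash-separated groups of two or more letters (plus the exceptional "j")
-- count as distinctive components.
components :: Descriptor -> [String]
components d = nub [g | g <- splitOn '/' d, length g >= 2 || g == "j"]
  where
    splitOn c s = case break (== c) s of
      (chunk, [])       -> [chunk]
      (chunk, _ : rest) -> chunk : splitOn c rest

-- Two blocks count as similar when they share at least two components.
similar :: Descriptor -> Descriptor -> Bool
similar d1 d2 = length (filter (`elem` components d2) (components d1)) >= 2

-- For each named block, list the other blocks it is similar to.
similarityTable :: [(String, Descriptor)] -> [(String, [String])]
similarityTable blocks =
  [ (name, [other | (other, d') <- blocks, other /= name, similar d d'])
  | (name, d) <- blocks ]
```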

Fun with (Kalman) Filters Part I

Planet Haskell - 18 hours 15 min ago
Suppose we wish to estimate the mean of a sample drawn from a normal distribution. In the Bayesian approach, we know the prior distribution for the mean (it could be a non-informative prior) and then we update this with our … Continue reading →
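For readers who want the punchline of that setup, the standard conjugate update is summarized below; this is my sketch, assuming the observation variance \(\sigma^2\) is known and the prior on the mean is \(\mu \sim \mathcal{N}(\mu_0, \sigma_0^2)\), and the post itself may proceed differently:

```latex
\[
\frac{1}{\sigma_n^{2}} = \frac{1}{\sigma_0^{2}} + \frac{n}{\sigma^{2}},
\qquad
\mu_n = \sigma_n^{2}\left(\frac{\mu_0}{\sigma_0^{2}} + \frac{n\bar{x}}{\sigma^{2}}\right),
\qquad
\mu \mid x_1,\dots,x_n \sim \mathcal{N}(\mu_n, \sigma_n^{2}).
\]
```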

A Tutorial on Church Representations

Planet Haskell - 18 hours 15 min ago
Posted on July 19, 2014

I've written a few times about church representations, but never aimed at someone who'd never heard of what a church representation is. In fact, it doesn't really seem like too many people have! In this post I'd like to fix that :)

What is a Church Representation

Simply put, a church representation (CR) is a way of representing a piece of concrete data with a function. The CR can be used in exactly the same way as the concrete data, but it's comprised entirely of functions. They were originally described by Alonzo Church as a way of modeling all data in lambda calculus, where all we have is functions.

Tuples

The simplest CR I've found is that of tuples. Let's first look at our basic tuple API

type Tuple a b = ...

mkTuple :: a -> b -> Tuple a b
fst     :: Tuple a b -> a
snd     :: Tuple a b -> b

Now this is trivially implemented with (,)

type Tuple a b = (a, b)

mkTuple = (,)
fst     = Prelude.fst
snd     = Prelude.snd

The church representation preserves the interface, but changes all the underlying implementations.

type Tuple a b = forall c. (a -> b -> c) -> c

There's our church pair; notice that it's comprised only of ->. It also makes use of higher rank types. This means that a Tuple a b can be applied to a function producing any c and it must return something of that type. Let's look at how the rest of our API is implemented

mkTuple a b = \f -> f a b
fst tup = tup (\a _ -> a)
snd tup = tup (\_ b -> b)

And that's it! It's helpful to step through some reductions here

fst (mkTuple 1 2)
fst (\f -> f 1 2)
(\f -> f 1 2) (\a _ -> a)
(\a _ -> a) 1 2
1

And for snd

snd (mkTuple True False)
snd (\f -> f True False)
(\f -> f True False) (\_ b -> b)
(\_ b -> b) True False
False

So we can see that these are clearly morally equivalent. The only real question here is whether, for each CR tuple, there exists a normal tuple. This isn't immediately apparent since the function type for the CR looks a lot more general. In fact, the key to this proof lies in the forall c part; this extra polymorphism lets us use a powerful technique called “parametricity” to prove that they're equivalent. I won't actually go into such a proof now since it's not entirely relevant, but it's worth noting that (,) and Tuple are completely isomorphic. To convert between them is pretty straightforward

isoL :: Tuple a b -> (a, b)
isoL tup = tup (,)

isoR :: (a, b) -> Tuple a b
isoR (a, b) = \f -> f a b

Now that we have an idea of how church representations “work”, let's go through a few more examples to start to see a pattern.

Booleans

Booleans have the simplest API of all

type Boolean = ...

true  :: Boolean
false :: Boolean
test  :: Boolean -> a -> a -> a

We can build all other boolean operations on top of test

a && b = test a b false
a || b = test a true b
when t e = test t e (return ())

This API is quite simple to implement with Bool,

type Boolean = Bool

true  = True
false = False
test b t e = if b then t else e

But how could we represent this with functions? The answer stems from test,

type Boolean = forall a. a -> a -> a

Clever readers will notice this is almost identical to test: a boolean gets two arguments and returns one or the other.

true  = \a _ -> a
false = \_ b -> b
test b t e = b t e

We can write an isomorphism between Bool and Boolean as well

isoL :: Bool -> Boolean
isoL b = if b then true else false

isoR :: Boolean -> Bool
isoR b = test b True False

Lists

Now let's talk about lists. One of the interesting things is that lists are the first recursive data type we've dealt with so far.
Defining the API for lists isn’t entirely clear either. We want a small set of functions that can easily cover any conceivable operations on a list. The simplest way to do this is to realize that we can do exactly 3 things with lists. Make an empty list Add a new element to the front of an existing list Pattern match on them We can represent this with 3 functions type List a = ... nil :: List a cons :: a -> List a -> List a match :: List a -> b -> (a -> List a -> b) -> b If match looks confusing just remember that f list = match list g h is really the same as f [] = g f (x : xs) = h x xs In this way match is just the pure functional version of pattern matching. We can actually simplify the API by realizing that rather than this awkward match construct, we can use something cleaner. foldr forms a much more pleasant API to work with since it’s really the most primitive form of “recursing” on a list. match :: List a -> (a -> List a -> b) -> b -> b match list f b = fst $ foldr list worker (b, nil) where worker x (b, xs) = (f x xs, cons x xs) The especially nice thing about foldr is that it doesn’t mention List a in its two “destruction” functions; all the recursion is handled in the implementation. We can implement CR lists trivially using foldr type List a = forall b. (a -> b -> b) -> b -> b nil = \ _ nil -> nil cons x xs = \ cons nil -> x `cons` xs cons nil foldr list cons nil = list cons nil Notice that we handle the recursion in the list type by having a b as an argument? This is similar to how the accumulator to foldr gets the processed tail of the list. This is a common technique for handling recursion in our church representations. Last but not least, the isomorphism arises from foldr (:) [], isoL :: List a -> [a] isoL l = l (:) [] isoR :: [a] -> List a isoR l f z = foldr f z l Either The last case that we’ll look at is Either. Like Pair, Either has 3 different operations. type Or a b = ... inl :: a -> Or a b inr :: b -> Or a b or :: Or a b -> (a -> c) -> (b -> c) -> c This is pretty easy to implement with Either type Or a b = Either a b inl = Left inr = Right or (Left a) f g = f a or (Right b) f g = g b Once again, the trick to encoding this as a function falls right out of the API. In this case we use the type of or type Or a b = forall c. (a -> c) -> (b -> c) -> c inl a = \f g -> f a inr b = \f g -> g b or x = x Last but not least, let’s quickly rattle off our isomorphism. isoL :: Or a b -> Either a b isoL o = o Left Right isoR :: Either a b -> Or a b isoR e = \f g -> either f g e The Pattern So now we can talk about the underlying pattern in CRs. First remember that for any type T, we have a list of n distinct constructors T1 T2 T3…Tn. Each of the constructors has m fields T11, T12, T13… Now the church representation of such a type T is forall c. (T11 -> T12 -> T13 -> .. -> c) -> (T21 -> T22 -> T23 -> .. -> c) ... -> (Tn1 -> Tn2 -> Tn3 -> .. -> c) -> c This pattern doesn’t map quite as nicely to recursive types. Here we have to take the extra step of replacing all occurrences of T with c in our resulting church representation. This is actually such a pleasant pattern to work with that I’ve written a library for automatically reifying a type between its church representation and concrete form. Wrap Up Hopefully you now understand what a church representation is. It’s worth noting that a lot of stuff Haskellers stumble upon daily are really church representations in disguise. My favorite example is maybe; this function takes a failure value and a success continuation along with a Maybe and produces a value.
With a little bit of imagination, one can realize that this is really just a function mapping a Maybe to a church representation! If you’re thinking that CRs are pretty cool, now might be a good time to take a look at one of my previous posts on deriving them automagically. 2014-07-19T00:00:00Z
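Following the pattern above, here is a minimal sketch (my own, not from the post) of the church representation of Maybe and its isomorphism with the concrete type; the names CMaybe, cnothing and cjust are made up for illustration:

{-# LANGUAGE RankNTypes #-}

-- A Maybe has two constructors: Nothing with no fields, Just with one field.
type CMaybe a = forall c. c -> (a -> c) -> c

cnothing :: CMaybe a
cnothing = \n _ -> n

cjust :: a -> CMaybe a
cjust x = \_ j -> j x

-- The isomorphism; note that isoR is just the Prelude's maybe function with
-- its arguments reordered, which is the point made above.
isoL :: CMaybe a -> Maybe a
isoL m = m Nothing Just

isoR :: Maybe a -> CMaybe a
isoR m = \n j -> maybe n j m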

[gggpgqye] Narrow type signatures which can be widened

Planet Haskell - 18 hours 15 min ago
Create a tool to find type signatures that are less polymorphic than would be inferred by type inference. This is a solution in search of a problem.
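As a small illustration (mine, not from the post), this is the kind of signature such a tool might flag, assuming it compares the declared type against the one GHC would infer:

-- Declared signature: monomorphic.
addPair :: (Int, Int) -> Int
addPair (x, y) = x + y

-- Without the signature GHC infers the wider type
--   addPair :: Num a => (a, a) -> a
-- so the tool would report that this signature can be widened.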

On uninhabited types and inconsistent logics

Planet Haskell - 18 hours 15 min ago
Earlier this week I gave a talk about the Curry-Howard isomorphism. Talks never go quite the way you expect. The biggest sticking point was my assertion that there is no function with the type a → b. I mentioned this as a throwaway remark on slide 7, assuming that everyone would agree instantly, and then we got totally hung up on it for about twenty minutes. Part of this was my surprise at discovering that most of the audience (members of the Philly Lambda functional programming group) was not familiar with the Haskell type system. I had assumed that most of the members of a functional programming interest group would be familiar with one of Haskell, ML, or Scala, all of which have the same basic type system. But this was not the case. (Many people are primarily interested in Scheme, for example.) I think the main problem was that I did not make clear to the audience what Haskell means when it says that a function has type a → b. At the talk, and then later on Reddit, people asked what about a function that takes an integer and returns a string: doesn't it have type a → b? If you know one of the HM languages, you know that of course it doesn't; it has type Int → String, which is not the same at all. But I wasn't prepared for this confusion and it took me a while to formulate the answer. I think I underestimated the degree to which I have internalized the behavior of Hindley-Milner type systems after twenty years. Next time, I will be better prepared, and will say something like the following: A function which takes an integer and returns a string does not have the type a → b; it has the type Int → String. You must pass it an integer, and you may only use its return value in a place that makes sense for a string. If f has this type, then 3 + f 4 is a compile-time type error because Haskell knows that f returns a string, and strings do not work with +. But if f had the type a → b, then 3 + f 4 would be legal, because context requires that f return a number, and the type a → b says that it can return a number, because a number is an instance of the completely general type b. The type a → b, in contrast to Int → String, means that b and a are completely unconstrained. Say function f had type a → b. Then you would be able to use the expression f x in any context that was expecting any sort of return value; you could write any or all of: 3 + f x head(f x) "foo" ++ f x True && f x and they would all type check correctly, regardless of the type of x. In the first line, f x would return a number; in the second line f would return a list; in the third line it would return a string, and in the fourth line it would return a boolean. And in each case f would have to be able to do what was required regardless of the type of x, so without even looking at x. But how could you possibly write such a function f? You can't; it's impossible. Contrast this with the identity function id, which has type a → a. This says that id always returns a value whose type is the same as that of its argument. So you can write 3 + id x as long as x has the right type for +, and you can write head(id x) as long as x has the right type for head, and so on. But for f to have the type a → b, all those would have to work regardless of the type of the argument to f. And there is no way to write such an f. Actually I wonder now if part of the problem is that we like to write a → b when what we really mean is the type ∀a.∀b.a → b. Perhaps making the quantifiers explicit would clear things up?
I suppose it probably wouldn't have, at least in this case. The issue is a bit complicated by the fact that the function loop :: a -> b loop x = loop x does have the type a → b, and, in a language with exceptions, throw has that type also; or consider Haskell foo :: a -> b foo x = undefined Unfortunately, just as I thought I was getting across the explanation of why there can be no function with type a → b, someone brought up exceptions and I had to mutter and look at my shoes. (You can also take the view that these functions have type a → ⊥, but the logical principle ⊥ → b is unexceptionable.) In fact, experienced practitioners will realize, the instant the type a → b appears, that they have written a function that never returns. Such an example was directly responsible for my own initial interest in functional programming and type systems; I read a 1992 paper (“An anecdote about ML type inference”) by Andrew R. Koenig in which he described writing a merge sort function, whose type was reported (by the SML type inferencer) as [a] -> [b], and the reason was that it had a bug that would cause it to loop forever on any nonempty list. I came back from that conference convinced that I must learn ML, and Higher-Order Perl was a direct (although distant) outcome of that conviction. Any discussion of the Curry-Howard isomorphism, using Haskell as an example, is somewhat fraught with trouble, because Haskell's type logic is utterly inconsistent. In addition to the examples above, in Haskell one can write fix :: (a -> a) -> a fix f = let x = fix f in f x and as a statement of logic, (a → a) → a is patently false. This might be an argument in favor of the Total Functional Programming suggested by D.A. Turner and others.
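To make the correspondence a bit more concrete, here is a small illustration of my own (not from the talk): polymorphic types that are inhabited by total functions read as propositional tautologies, while a → b does not.

-- 'a -> (b -> a)' is a theorem, and the proof is a total function:
constK :: a -> b -> a
constK x _ = x

-- Modus ponens, '(a -> b) -> a -> b', is likewise inhabited:
apply :: (a -> b) -> a -> b
apply f x = f x

-- The only way to give 'a -> b' is to never return a value at all:
never :: a -> b
never x = never x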

I Don't Hear You Talking: a silence on Silence Culture

Planet Haskell - 18 hours 15 min ago
A lot of ink has been spilt over trigger warnings lately. And I don't want to write about it because I feel like I don't have much to add to the conversation. But when I stop, that feeling nags at me. You can't think with your mouth open; and as someone who always had issues keeping her damn mouth shut, it took me a long time to learn that to listen you must be silent. ... And yet. ... And yet, when someone experiences strong emotions about her own marginalization, but feels compelled to self-silence: that's when you need to listen harder. Because there are a lot of voices I know full well, and I don't hear them talking. I know them because they're the voices of my friends, and among friends we talk about things we don't talk about. In the workaday world we put on our faces and never hint at the behemoths raging through our china cabinets. And when we let down our hair, those faces stay on, because you always know who might be listening. And behind closed doors, still, we keep them on because elsewise love would be too tragic. But in secret spaces, we talk. We are, every one of us, damaged. I may not know who hurt you yet, I may not know your story of pain, but I never assume there isn't one; because every single person I've known, when we get close enough, they tell me things we don't talk about. Sometimes it takes years before they feel safe enough, sometimes they never feel safe enough, but if they've ever lowered their guard to me, they've told me. Every. single. person. We are born and raised and live in a world drenched in abuse. And that abuse doesn't leave scars, it leaves open wounds waiting to have dirt rubbed in them. The first rule of what doesn't happen is that it cannot be spoken of. So healing only happens in those secret spaces, one-on-one, in the dark of night, far far from friends and strangers alike. This privatization of healing only compounds the problem of abuse. When we cannot see past others' faces, when we cannot see the living wounds they bear, when we do not hear their daily resistance against reiterations of violence, we come to think that somehow maybe they haven't been hurt as badly as we. When we see our own people succeed, or see leaders of resistance and "survivors" and "healed" voices speaking up against the injustice of the world, we think that somehow maybe they must be stronger than us, more resilient than us, more determined than us. When we cannot witness their struggle, we think that somehow maybe when they go to bed at nights they need not take the time to scrub out that daily dirt from their wounds. And when we cannot bear that witness, we see ourselves as lesser, broken, impostors. These are the voices I do not hear speaking out, or hear speaking in only roundabout whispers. These are the voices for whom trigger warnings are writ. As so precisely put by Aoife, Here's something I need you to understand: the vast majority of students when 'triggered' don't write howlers to department heads or flip laptops over in crowded classrooms for YouTube counts. On the contrary, they most often shut down and collapse into numbness. That numbness, that collapse, is the last tool our minds have to keep our faces in place when some sudden shock reopens sore wounds. The second rule of what we do not talk about is that wounds never heal, not entirely. We —some of us— can manage not flinching when someone raises their hand. We —some of us— learn to laugh along when someone touches our back. We —some of us— learn to feel safe in a room alone with a man. 
We —some of us— learn to turn blind to the "tranny" jokes, to the blackface, to the jibes about trailer parks and country living, to the "sex" scene where she lay sleeping, the scene where he takes the other man 'round back, the man who slaps his wife, the mother who cuffs her child, being told to go pick a switch, to the child starving on the street, to the college kids playing "tricks" on the homeless. We —some of us— learn to live as stone. But stone don't heal, and we all have our rituals of self-care we won't talk about. But when everywhere all you ever see is stone, you know your flesh will never make it if the light still shines in your eyes. And I too am guilty of this silence culture. Because the fact of the matter is, in this day and age, to speak is to jeopardize my career. I can talk about being trans or being a dyke, and I can at least pretend that the laws on the books will mean a damn. But if I talk about my childhood, I won't be seen as an adult. If I talk about my abuse, I won't be seen as stable. If I bring up my mental life, I won't be seen as professional. If I talk about spoons, I won't be seen as reliable. And so I stuff it down and self-silence and hide what it's like, that daily living with depression and PTSD, til some trigger sets it off and out comes that rage which grows on silence. Some full-force punch to the gut, some words like "I'm not sure suicide is ever the answer" and my eyes go black, and words come out, and they sound nice enough, but every one means "I hate you". Not to be rude, but sometimes suicide is the answer. It may not be the best answer, but it is an answer. And, unfortunately, sometimes that is all that's required. Sometimes a terrible fucking answer is the only answer to be found. I say this as someone who's spent more of her life being suicidal than not, as someone who's survived multiple attempts, as someone whose friends have almost invariably spent years being suicidal. Yes, it sucks. And no, it doesn't "solve" anything. But think of the suffering of the victim. It is incredibly difficult to overcome the self-preservation instinct. Profoundly difficult. Imagine the volume of suffering it takes, the depths and duration of misery required to actively overcome the single most powerful compulsion any living creature can experience. There comes a point, long after endurance has already given out, when the full weight of that volume cannot be borne. Whenever this happens, my thoughts are always with the victim. I cannot help but empathize with that terrible terrible suffering. Because the fact of the matter is, I'm too scared to talk. We live in a culture where suicide is "the easy way" and you're supposed to "take it like a man", but the fact of the matter is no one can take it. We are, every one of us, damaged. We privatize our healing because the first rule of abuse is that it must never be mentioned, must never never be discussed. The learning of silence is the first abuse: it is how we are taught to abuse ourselves, to never never hear that we're not alone. This isn't about suicide and depression. Isn't about rape and racism. Isn't about violence and neglect. This is about silence. About the words we don't use to not say what you can't talk about. This is about learning to speak using words. About how we must open our mouths in order to listen.

Information content and allele frequency difference

Planet Haskell - 18 hours 15 min ago
ESI scores and allele frequency difference Just a quick note on the relationship between ESI scores and allele frequencies. Allele frequency difference is of course related to – perhaps even the definition of – diversification, but the information we gain from observing an allele also depends on the specific allele frequencies involved. The graph below shows how this is related. Each line represents a fixed allele difference, from 0.05 at the bottom, to 0.95 at the top, and the x-axis is the average allele frequency between populations. We see that for small differences, the actual frequencies matter little, but for moderate to large allele differences, allele frequencies near the extremes have a large effect. Note that this is information per allele, and thus not ESI (which is the expected information from observing the site, in other words a weighted average over all alleles). 2014-07-17T12:00:00Z
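As a rough sketch of the kind of quantity involved (my own reading, and not necessarily the exact ESI definition used here), the information gained from one allele can be taken as the log-ratio of its frequencies in the two populations, parameterised by the average frequency and the difference as on the graph's axes:

-- Per-allele information, in bits, for an allele with frequency p1 in one
-- population and p2 in the other (assumed definition, for illustration only).
alleleInfo :: Double -> Double -> Double
alleleInfo p1 p2 = logBase 2 (p1 / p2)

-- Given an average frequency m and a fixed difference d, the per-allele value;
-- near the extremes this grows quickly, matching the behaviour described above.
infoAt :: Double -> Double -> Double
infoAt m d = alleleInfo (m + d / 2) (m - d / 2)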

Guess what this does (solution)

Planet Haskell - 18 hours 15 min ago
A few weeks ago I asked people to predict, without trying it first, what this would print: perl -le 'print(two + two == five ? "true" : "false")' (If you haven't seen this yet, I recommend that you guess, and then test your guess, before reading the rest of this article.) People familiar with Perl guess that it will print true; that is what I guessed. The reasoning is as follows: Perl is willing to treat the unquoted strings two and five as strings, as if they had been quoted, and is also happy to use the + and == operators on them, converting the strings to numbers in its usual way. If the strings had looked like "2" and "5" Perl would have treated them as 2 and 5, but as they don't look like decimal numerals, Perl interprets them as zeroes. (Perl wants to issue a warning about this, but the warning is not enabled by default.) Since the two and five are treated as zeroes, the result of the == comparison is true, and the string "true" should be selected and printed. So far this is a little bit odd, but not excessively odd; it's the sort of thing you expect from programming languages, all of which more or less suck. For example, Python's behavior, although different, is about equally peculiar. Although Python does require that the strings two and five be quoted, it is happy to do its own peculiar thing with "two" + "two" == "five", which happens to be false: in Python the + operator is overloaded and has completely different behaviors on strings and numbers, so that while in Perl "2" + "2" is the number 4, in Python it is the string 22, and "two" + "two" yields the string "twotwo". Had the program above actually printed true, as I expected it would, or even false, I would not have found it remarkable. However, this is not what the program does do. The explanation of two paragraphs earlier is totally wrong. Instead, the program prints nothing, and the reason is incredibly convoluted and bizarre. First, you must know that print has an optional first argument. (I have plans for an article about how optional first arguments are almost always a bad move, but contrary to my usual practice I will not insert it here.) In Perl, the print function can be invoked in two ways: print HANDLE $a, $b, $c, …; print $a, $b, $c, …; The former prints out the list $a, $b, $c, … to the filehandle HANDLE; the latter uses the default handle, which typically points at the terminal. How does Perl decide which of these forms is being used? Specifically, in the second form, how does it know that $a is one of the items to be printed, rather than a variable containing the filehandle to print to? The answer to this question is further complicated by the fact that the HANDLE in the first form could be either an unquoted string, which is the name of the handle to print to, or it could be a variable containing a filehandle value. Both of these prints should do the same thing: my $handle = \*STDERR; print STDERR $a, $b, $c; print $handle $a, $b, $c; Perl's method to decide whether a particular print uses an explicit or the default handle is a somewhat complicated heuristic. The basic rule is that the filehandle, if present, can be distinguished because its trailing comma is omitted.
But if the filehandle were allowed to be the result of an arbitrary expression, it might be difficult for the parser to decide where there was a comma; consider the hypothetical expression: print $a += EXPRESSION, $b $c, $d, $e; Here the intention is that the $a += EXPRESSION, $b expression calculates the filehandle value (which is actually retrieved from $b, the $a += … part being executed only for its side effect) and the remaining $c, $d, $e are the values to be printed. To allow this sort of thing would be way too confusing to both Perl and to the programmer. So there is the further rule that the filehandle expression, if present, must be short, either a simple scalar variable such as $fh, or a bare unquoted string that is in the right format for a filehandle name, such as HANDLE. Then the parser need only peek ahead a token or two to see if there is an upcoming comma. So for example, in print STDERR $a, $b, $c; the print is immediately followed by STDERR, which could be a filehandle name, and STDERR is not followed by a comma, so STDERR is taken to be the name of the output handle. And in print $x, $a, $b, $c; the print is immediately followed by the simple scalar value $x, but this $x is followed by a comma, so is considered one of the things to be printed, and the target of the print is the default output handle. In print STDERR, $a, $b, $c; Perl has a puzzle: STDERR looks like a filehandle, but it is followed by a comma. This is a compile-time error; Perl complains “No comma allowed after filehandle” and aborts. If you want to print the literal string STDERR, you must quote it, and if you want to print A, B, and C to the standard error handle, you must omit the first comma. Now we return to the original example. perl -le 'print(two + two == five ? "true" : "false")' Here Perl sees the unquoted string two which could be a filehandle name, and which is not followed by a comma. So it takes the first two to be the output handle name. Then it evaluates the expression + two == five ? "true" : "false" and obtains the value true. (The leading + is a unary plus operator, which is a no-op. The bare two and five are taken to be string constants, which, compared with the numeric == operator, are considered to be numerically zero, eliciting the same warning that I mentioned earlier that I had not enabled. Thus the comparison Perl actually does is 0 == 0, which is true, and the resulting string is true.) This value, the string true, is then printed to the filehandle named two. Had we previously opened such a filehandle, say with open two, ">", "output-file"; then the output would have been sent to the filehandle as usual. Printing to a non-open filehandle elicits an optional warning from Perl, but as I mentioned, I have not enabled warnings, so the print silently fails, yielding a false value. Had I enabled those optional warnings, we would have seen a plethora of them: Unquoted string "two" may clash with future reserved word at -e line 1. Unquoted string "two" may clash with future reserved word at -e line 1. Unquoted string "five" may clash with future reserved word at -e line 1. Name "main::two" used only once: possible typo at -e line 1. Argument "five" isn't numeric in numeric eq (==) at -e line 1. Argument "two" isn't numeric in numeric eq (==) at -e line 1. print() on unopened filehandle two at -e line 1. (The first four are compile-time warnings; the last three are issued at execution time.)
The crucial warning is the one at the end, advising us that the output of print was directed to the filehandle two which was never opened for output. [ Addendum 20140718: I keep thinking of the following remark of Edsger W. Dijkstra: [This phenomenon] takes one of two different forms: one programmer places a one-line program on the desk of another and … says, "Guess what it does!" From this observation we must conclude that this language as a tool is an open invitation for clever tricks; and while exactly this may be the explanation for some of its appeal, viz., to those who like to show how clever they are, I am sorry, but I must regard this as one of the most damning things that can be said about a programming language. But my intent is different than what Dijkstra describes. His programmer is proud, but I am disgusted. Incidentally, I believe that Dijkstra was discussing APL here. ]

Haskell data analysis: Reading NetCDF files

Planet Haskell - 18 hours 15 min ago
Haskell data analysis: Reading NetCDF files July 16, 2014 I never really intended the FFT stuff to go on for as long as it did, since that sort of thing wasn’t really what I was planning as the focus for this Data Analysis in Haskell series. The FFT was intended primarily as a “warm-up” exercise. After fourteen blog articles and about 10,000 words, everyone ought to be sufficiently warmed up now… Instead of trying to lay out any kind of fundamental principles for data analysis before we get going, I’m just going to dive into a real example. I’ll talk about generalities as we go along when we have some context in which to place them. All of the analysis described in this next series of articles closely follows that in the paper: D. T. Crommelin (2004). Observed nondiffusive dynamics in large-scale atmospheric flow. J. Atmos. Sci. 61(19), 2384–2396. We’re going to replicate most of the data analysis and visualisation from this paper, maybe adding a few interesting extras towards the end. It’s going to take a couple of articles to lay out some of the background to this problem, but I want to start here with something very practical and not specific to this particular problem. We’re going to look at how to gain access to meteorological and climate data stored in the NetCDF file format from Haskell. This will be useful not only for the low-frequency atmospheric variability problem we’re going to look at, but for other things in the future too. The NetCDF file format The NetCDF file format is a “self-describing” binary format that’s used a lot for storing atmospheric and oceanographic data. It’s “self-describing” in the sense that the file format contains metadata describing the spatial and temporal dimensions of variables, as well as optional information about units and a bunch of other stuff. It’s a slightly intimidating format to deal with at first, but we’ll only need to know how a subset of it works. (And it’s much easier to deal with than HDF5, which we’ll probably get around to when we look at some remote sensing data at some point.) So, here’s the 30-second introduction to NetCDF. A NetCDF file contains dimensions, variables and attributes. A NetCDF dimension just has a name and a size. One dimension can be specified as an “unlimited” or record dimension, which is usually used for time series, and just means that you can tack more records on the end of the file. A NetCDF variable has a name, a type, a list of dimensions, some attributes and some data. As well as attributes attached to variables, a NetCDF file can also have some file-level global attributes. A NetCDF attribute has a name, a type and a value. And that’s basically it (for NetCDF-3, at least; NetCDF-4 is a different beast, but I’ve never seen a NetCDF-4 file in the wild, so I don’t worry about it too much). An example NetCDF file That’s very abstract, so let’s look at a real example. The listing below shows the output from the ncdump tool for one of the data files we’re going to be using, which stores a variable called geopotential height (I’ll explain exactly what this is in a later article – for the moment, it’s enough to know that it’s related to atmospheric pressure). The ncdump tool is useful for getting a quick look at what’s in a NetCDF file – it shows all the dimension and variable definitions, all attributes and also dumps the entire data contents of the file as ASCII (which you usually want to chop off…). 
netcdf z500-1 { dimensions: longitude = 144 ; latitude = 73 ; time = 7670 ; variables: float longitude(longitude) ; longitude:units = "degrees_east" ; longitude:long_name = "longitude" ; float latitude(latitude) ; latitude:units = "degrees_north" ; latitude:long_name = "latitude" ; int time(time) ; time:units = "hours since 1900-01-01 00:00:0.0" ; time:long_name = "time" ; short z500(time, latitude, longitude) ; z500:scale_factor = 0.251043963537454 ; z500:add_offset = 50893.8041655182 ; z500:_FillValue = -32767s ; z500:missing_value = -32767s ; z500:units = "m**2 s**-2" ; z500:long_name = "Geopotential" ; z500:standard_name = "geopotential" ; // global attributes: :Conventions = "CF-1.0" ; :history = "Sun Feb 9 18:46:25 2014: ncrename -v z,z500 z500-1.nc\n", "2014-01-29 21:04:31 GMT by grib_to_netcdf-1.12.0: grib_to_netcdf /data/soa/scra tch/netcdf-web237-20140129210048-3022-3037.target -o /data/soa/scratch/netcdf-web237-20140129210411-3022 -3038.nc" ; data: longitude = 0, 2.5, 5, 7.5, 10, 12.5, 15, 17.5, 20, 22.5, 25, 27.5, 30, 32.5, 35, 37.5, 40, 42.5, 45, 47.5, 50, 52.5, 55, 57.5, 60, 62.5, 65, 67.5, 70, 72.5, 75, 77.5, 80, 82.5, 85, 87.5, 90, 92.5, 95, 97.5, 100, 102.5, 105, 107.5, 110, 112.5, 115, 117.5, 120, 122.5, 125, 127.5, 130, 132.5, 135, 137.5, 140, 142.5, 145, 147.5, 150, 152.5, 155, 157.5, 160, 162.5, 165, 167.5, 170, 172.5, 175, 177.5, 180, 182.5, 185, 187.5, 190, 192.5, 195, 197.5, 200, 202.5, 205, 207.5, 210, 212.5, 215, 217.5, 220, 222.5, 225, 227.5, 230, 232.5, 235, 237.5, 240, 242.5, 245, 247.5, 250, 252.5, 255, 257.5, 260, 262.5, 265, 267.5, 270, 272.5, 275, 277.5, 280, 282.5, 285, 287.5, 290, 292.5, 295, 297.5, 300, 302.5, 305, 307.5, 310, 312.5, 315, 317.5, 320, 322.5, 325, 327.5, 330, 332.5, 335, 337.5, 340, 342.5, 345, 347.5, 350, 352.5, 355, 357.5 ; latitude = 90, 87.5, 85, 82.5, 80, 77.5, 75, 72.5, 70, 67.5, 65, 62.5, 60, 57.5, 55, 52.5, 50, 47.5, 45, 42.5, 40, 37.5, 35, 32.5, 30, 27.5, 25, 22.5, 20, 17.5, 15, 12.5, 10, 7.5, 5, 2.5, 0, -2.5, -5, -7.5, -10, -12.5, -15, -17.5, -20, -22.5, -25, -27.5, -30, -32.5, -35, -37.5, -40, -42.5, -45, -47.5, -50, -52.5, -55, -57.5, -60, -62.5, -65, -67.5, -70, -72.5, -75, -77.5, -80, -82.5, -85, -87.5, -90 ; As shown in the first line of the listing, this file is called z500-1.nc (it’s contains daily 500 millibar geopotential height data). It has dimensions called longitude, latitude and time. There are variables called longitude, latitude, time and z500. The variables with names that are the same as dimensions are called coordinate variables and are part of a metadata convention that provides information about the file dimensions. The NetCDF file format itself doesn’t require that dimensions have any more information provided for them than their name and size, but for most applications, it makes sense to give units and values for points along the dimensions. If we look at the longitude variable, we see that it’s of type float and has one dimension, which is the longitude dimension – this is how you tell a coordinate variable from a data variable: it will have the same name as the dimension it goes with and will be indexed just by that dimension. Immediately after the line defining the longitude variable are the attributes for the variable. Here they give units and a display name (they can also give information about the range of values and the orientation of the coordinate axis). All of these attributes are again defined by a metadata convention, but they’re mostly pretty easy to figure out. 
Here, the longitude is given in degrees east of the prime meridian, and if we look further down the listing, we can see the data values for the longitude variable, running from zero degrees to 357.5°E. From all this, we can infer that the 144 longitude values in the file start at the prime meridian and increase eastwards. Similarly, the latitude variable is a coordinate variable for the latitude dimension, and specifies the latitude of points on the globe. The latitude is measured in degrees north of the equator and ranges from 90° (the North pole) to -90° (the South pole). Taking a look at the data values for the latitude variable, we can see that 90 degrees north is in index 0, and the 73 latitude values decrease with increasing index until we reach the South pole. The time coordinate variable is a little more interesting, mostly because of its units – this “hours since YYYY-MM-DD HH:MM:SS” approach to time units is very common in NetCDF files and it’s usually pretty easy to work with. Finally, we get on to the data variable, z500. This is defined on a time/latitude/longitude grid (so, in the data, the longitude is the fastest changing coordinate). The variable has one slightly odd feature: its type. The types for the coordinate variables were all float or double, as you’d expect, but z500 is declared to be a short integer value. Why? Well, NetCDF files are quite often big so it can make sense to use some sort of encoding to reduce file sizes. (I worked on a paleoclimate modelling project where each model simulation resulted in about 200 Gb of data, for a dozen models for half a dozen different scenarios. In “Big Data” terms, it’s not so large, but it’s still quite a bit of data for people to download from a public server.) Here, the real-valued geopotential is packed into a short integer. The true value of the field can be recovered from the short integer values in the file using the add_offset and scale_factor attributes – each stored value is multiplied by the scale_factor and then has the add_offset added, giving back the geopotential values shown in the listing above. Last of all we have the global attributes in the file. The most interesting of these is the Conventions attribute, which specifies that the file uses the CF metadata convention. This is the convention that defines how coordinate variables are represented, how data values can be compressed by scaling and offsetting, how units and axes are represented, and so on. Given a NetCDF file using the CF convention (or another related convention called the COARDS metadata convention), it’s pretty straightforward to figure out what’s going on. Reading NetCDF files in Haskell So, how do we read NetCDF files into a Haskell program to work on them? I’ve seen a few Haskell FFI bindings to parts of the main NetCDF C library, but none of those really seemed satisfactory for day-to-day use, so I’ve written a simple library called hnetcdf that includes both a low-level wrapping of the C library and a more idiomatic Haskell interface (which is what we’ll be using). In particular, because NetCDF data is usually grid-based, hnetcdf supports reading data values into a number of different kinds of Haskell arrays (storable Vectors, Repa arrays and hmatrix arrays). For this analysis, we’re going to use hmatrix vectors and matrices, since they provide a nice “Matlab in Haskell” interface for doing the sort of linear algebra we’ll need.
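Returning to the packing scheme described above, here is a minimal sketch of the unpacking step (my own illustration, using the attribute values from the listing rather than hnetcdf's own code):

import Data.Int (Int16)

-- COARDS-style unpacking: value = stored * scale_factor + add_offset.
unpack :: Double -> Double -> Int16 -> Double
unpack scaleFactor addOffset raw = fromIntegral raw * scaleFactor + addOffset

-- For the z500 variable shown above:
unpackZ500 :: Int16 -> Double
unpackZ500 = unpack 0.251043963537454 50893.8041655182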
In this section, we’ll look at some simple code for accessing the NetCDF file whose contents we looked at above which will serve as a basis for the more complicated things we’ll do later. (The geopotential height data we’re using here is from the ERA-Interim reanalysis project – again, I’ll explain what “reanalysis” means in a later article. For the moment, think of it as a “best guess” view of the state of the atmosphere at different moments in time.) We’ll open the NetCDF file, show how to access the file metadata and how to read data values from coordinate and data variables. We need a few imports first, along with a couple of useful type synonyms for return values from hnetcdf functions: import Prelude hiding (length, sum) import Control.Applicative ((<$>)) import qualified Data.Map as M import Foreign.C import Foreign.Storable import Numeric.Container import Data.NetCDF import Data.NetCDF.HMatrix type VRet a = IO (Either NcError (HVector a)) type MRet a = IO (Either NcError (HRowMajorMatrix a)) As well as a few utility imports and the Numeric.Container module from the hmatrix library, we import Data.NetCDF and Data.NetCDF.HMatrix – the first of these is the general hnetcdf API and the second is the module that allows us to use hnetcdf with hmatrix. Most of the functions in hnetcdf handle errors by returning an Either of NcError and a “useful” return type. The VRet and MRet type synonyms represent return values for vectors and matrices respectively. When using hnetcdf, it’s often necessary to supply type annotations to control the conversion from NetCDF values to Haskell values, and these type synonyms come in handy for doing this. Reading NetCDF metadata Examining NetCDF metadata is simple: Right nc <- openFile "/big/data/reanalysis/ERA-Interim/z500-1.nc" putStrLn $ "Name: " ++ ncName nc putStrLn $ "Dims: " ++ show (M.keys $ ncDims nc) putStr $ unlines $ map (\(n, s) -> " " ++ n ++ ": " ++ s) $ M.toList $ flip M.map (ncDims nc) $ \d -> show (ncDimLength d) ++ if ncDimUnlimited d then " (UNLIM)" else "" putStrLn $ "Vars: " ++ show (M.keys $ ncVars nc) putStrLn $ "Global attributes: " ++ show (M.keys $ ncAttrs nc) let Just ntime = ncDimLength <$> ncDim nc "time" Just nlat = ncDimLength <$> ncDim nc "latitude" Just nlon = ncDimLength <$> ncDim nc "longitude" We open a file using hnetcdf’s openFile function (here assuming that there are no errors), getting a value of type NcInfo (defined in Data.NetCDF.Metadata in hnetcdf). This is a value representing all of the metadata in the NetCDF file: dimension, variable and attribute definitions all bundled up together into a single value from which we can access different metadata elements. We can access maps from names to dimension, variable and global attribute definitions and can then extract individual dimensions and variables to find information about them. 
The code in the listing above produces this output for the ERA-Interim Z500 NetCDF file used here: Name: /big/data/reanalysis/ERA-Interim/z500-1.nc Dims: ["latitude","longitude","time"] latitude: 73 longitude: 144 time: 7670 Vars: ["latitude","longitude","time","z500"] Global attributes: ["Conventions","history"] Accessing coordinate values Reading values from a NetCDF file requires a little bit of care to ensure that NetCDF types are mapped correctly to Haskell types: let (Just lonvar) = ncVar nc "longitude" Right (HVector lon) <- get nc lonvar :: VRet CFloat let mlon = mean lon putStrLn $ "longitude: " ++ show lon ++ " -> " ++ show mlon Right (HVector lon2) <- getS nc lonvar [0] [72] [2] :: VRet CFloat let mlon2 = mean lon2 putStrLn $ "longitude (every 2): " ++ show lon2 ++ " -> " ++ show mlon2 This shows how to read values from one-dimensional coordinate variables, both reading the whole variable, using hnetcdf’s get function, and reading a strided slice of the data using the getS function. In both cases, it’s necessary to specify the return type of get or getS explicitly – here this is done using the convenience type synonym VRet defined earlier. This code fragment produces this output: longitude: fromList [0.0,2.5,5.0,7.5,10.0,12.5,15.0,17.5,20.0,22.5,25.0, 27.5,30.0,32.5,35.0,37.5,40.0,42.5,45.0,47.5,50.0,52.5,55.0,57.5,60.0, 62.5,65.0,67.5,70.0,72.5,75.0,77.5,80.0,82.5,85.0,87.5,90.0,92.5,95.0, 97.5,100.0,102.5,105.0,107.5,110.0,112.5,115.0,117.5,120.0,122.5,125.0, 127.5,130.0,132.5,135.0,137.5,140.0,142.5,145.0,147.5,150.0,152.5,155.0, 157.5,160.0,162.5,165.0,167.5,170.0,172.5,175.0,177.5,180.0,182.5,185.0, 187.5,190.0,192.5,195.0,197.5,200.0,202.5,205.0,207.5,210.0,212.5,215.0, 217.5,220.0,222.5,225.0,227.5,230.0,232.5,235.0,237.5,240.0,242.5,245.0, 247.5,250.0,252.5,255.0,257.5,260.0,262.5,265.0,267.5,270.0,272.5,275.0, 277.5,280.0,282.5,285.0,287.5,290.0,292.5,295.0,297.5,300.0,302.5,305.0, 307.5,310.0,312.5,315.0,317.5,320.0,322.5,325.0,327.5,330.0,332.5,335.0, 337.5,340.0,342.5,345.0,347.5,350.0,352.5,355.0,357.5] -> 178.75 longitude (every 2): fromList [0.0,5.0,10.0,15.0,20.0,25.0,30.0,35.0,40.0, 45.0,50.0,55.0,60.0,65.0,70.0,75.0,80.0,85.0,90.0,95.0,100.0,105.0,110.0, 115.0,120.0,125.0,130.0,135.0,140.0,145.0,150.0,155.0,160.0,165.0,170.0, 175.0,180.0,185.0,190.0,195.0,200.0,205.0,210.0,215.0,220.0,225.0,230.0, 235.0,240.0,245.0,250.0,255.0,260.0,265.0,270.0,275.0,280.0,285.0,290.0, 295.0,300.0,305.0,310.0,315.0,320.0,325.0,330.0,335.0,340.0,345.0,350.0, 355.0] -> 177.5 The mean function used in above is defined as: mean :: (Storable a, Fractional a) => Vector a -> a mean xs = (foldVector (+) 0 xs) / fromIntegral (dim xs) It requires a Storable type class constraint, and makes use of hmatrix’s foldVector function. Accessing data values Finally, we get round to reading the data that we’re interested in (of course, reading the metadata is a necessary prerequisite for this: this kind of geospatial data doesn’t mean much unless you can locate it in space and time, for which you need coordinate variables and their associated metadata). The next listing shows how we read the Z500 data into a row-major hmatrix matrix: let (Just zvar) = ncVar nc "z500" putStrLn $ "z500 dims: " ++ show (map ncDimName $ ncVarDims zvar) Right slice1tmp <- getA nc zvar [0, 0, 0] [1, nlat, nlon] :: MRet CShort let (HRowMajorMatrix slice1tmp2) = coardsScale zvar slice1tmp :: HRowMajorMatrix CDouble slice1 = cmap ((/ 9.8) . 
realToFrac) slice1tmp2 :: Matrix Double putStrLn $ "size slice1 = " ++ show (rows slice1) ++ " x " ++ show (cols slice1) putStrLn $ "lon(i=25) = " ++ show (lon @> (25 - 1)) putStrLn $ "lat(j=40) = " ++ show (lat @> (nlat - 40)) let v @!! (i, j) = v @@> (nlat - i, j - 1) putStrLn $ "slice1(i=25,j=40) = " ++ show (slice1 @!! (25, 40)) There are a number of things to note here. First, we use the getA function, which allows us to specify starting indexes and counts for each dimension in the variable we're reading. Here we read all latitude and longitude points for a single vertical level in the atmosphere (which is the only one there is in this file). Second, the values stored in this file are geopotential values, not geopotential height (so their units are m² s⁻² instead of metres), which we can convert to geopotential height by dividing by the acceleration due to gravity (about 9.8 m s⁻²). Third, the geopotential values are stored in a compressed form as short integers according to the COARDS metadata convention. This means that if we want to work with floating point values (which we almost always do), we need to convert using the hnetcdf coardsScale function, which reads the relevant scaling and offset attributes from the NetCDF variable and uses them to convert from the stored data values to some fractional numeric type (in this case CDouble – the destination type also needs to be an instance of hnetcdf's NcStorable class). Once we have the input data converted to a normal hmatrix Matrix value, we can manipulate it like any other data value. In particular, here we extract the geopotential height at given latitude and longitude coordinates (the @!! operator defined here is just a custom indexing operator to deal with the fact that the latitude values are stored in north-to-south order). The most laborious part of all this is managing the correspondence between coordinate values and indexes, and managing the conversions between the C types used to represent values stored in NetCDF files (CDouble, CShort, etc.) and the native Haskell types that we'd like to use for our data manipulation activities. To be fair, the first of these problems is a problem for any user of NetCDF files, and Haskell's data abstraction capabilities at least make dealing with metadata values less onerous than in C or C++. The second issue is a little more annoying, but it does ensure that we maintain a good cordon sanitaire between external representations of data values and the internal representations that we use. What's next We're going to have to spend a couple of articles covering some background to the atmospheric variability problem we're going to look at, just to place some of this stuff in context: we need to look a little at just what this study is trying to address, we need to understand some basic facts about atmospheric dynamics and the data we're going to be using, and we need to take a look at the gross dynamics of the atmosphere as they appear in these data, just so that we have some sort of idea what we're looking at later on. That will probably take two or three articles, but then we can start with some real data analysis. 2014-07-16T21:09:57Z
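Since managing the correspondence between coordinate values and indexes is the laborious part, here is a small helper of my own (not part of hnetcdf) for the regular 2.5° grid described in the listing, with latitude stored north-to-south:

-- Convert a (latitude, longitude) pair in degrees to (row, column) indexes
-- for the grid above: longitudes 0, 2.5, ..., 357.5; latitudes 90, 87.5, ..., -90.
latLonToIndex :: Double -> Double -> (Int, Int)
latLonToIndex lat lon = (row, col)
  where
    row = round ((90 - lat) / 2.5)     -- index 0 is the North pole
    col = round (lon / 2.5) `mod` 144  -- wrap longitudes past 357.5

-- Example: latLonToIndex 52.5 0 == (15, 0), the grid point at 52.5°N, 0°E.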

Guess what this does

Planet Haskell - 18 hours 15 min ago
Here's a Perl quiz that I confidently predict nobody will get right. Without trying it first, what does the following program print? perl -le 'print(two + two == five ? "true" : "false")' (I will discuss the surprising answer tomorrow.)

Notes on the Advanced Akka course

Planet Haskell - 18 hours 15 min ago
The Advanced Akka course is provided by Typesafe and is aimed at teaching advanced usages of Akka. The course covers the basics of Akka, Remoting, Clustering, Routers, CRDTs, Cluster Sharding and Akka Persistence. The following post starts with a general introduction to Akka and presents the takeaways from the course as we experienced them. A general overview of Akka The reader who is already familiar with Akka can skip this section. According to the Akka site this is Akka: Akka is a toolkit and runtime for building highly concurrent, distributed, and fault tolerant event-driven applications on the JVM. Akka achieves this by using Actors. Actors are very lightweight concurrent entities. Each Actor has a corresponding mailbox stored separately from the Actor. The Actors together with their mailboxes reside in an ActorSystem. Additionally, the ActorSystem contains the Dispatcher which executes the handling of a message by an actor. Each Actor only handles a single message at a time. In Akka everything is remote by design and philosophy. In practice this means that each Actor is identified by its ActorRef. This is a reference to the actor which provides Location Transparency. Actors communicate with each other by sending messages to another Actor through an ActorRef. This sending of the message takes virtually no time. In addition to ActorRef there exists also an ActorSelection which contains a path to one or more actors. Upon each sending of the message the path is traversed until the actor is found or not. However, no message is sent back when the actor is not found. States: Started - Stopped - Terminated If an actor enters the Stopped state it first stops its child actors before entering the Terminated state. Best-practices Import the context.dispatcher instead of the global Scala ExecutionContext. It is the ExecutionContext managed by Akka. Using the global context causes the Actors to be run in the global Thread pool. You should not use PoisonPill as it will be removed from future versions of Akka since it is not specific enough. Roll your own message to make sure the appropriate actions for graceful shutdown are done. Use context.stop to stop your actor. Place your business logic in a separate trait and mix it in to the actor. This increases testability as you can easily unit test the trait containing the business logic. Also, you should put the creation of any child actors inside a separate method so the creation can be overridden from tests. Remoting With the Remoting extension it is possible to communicate with other Actor Systems. This communication is often done through ActorSelections instead of ActorRef. Remoting uses Java serialisation by default which is slow and fragile in light of changing definitions. It is possible and recommended to use another mechanism such as Google Protobuf. Clustering Akka has a simple perspective on cluster management with regards to split-brain scenarios. Nodes become dead when they are observed as dead and they cannot resurrect. The only way a node can come up again is if it registers itself again. When a net split happens the other nodes are marked as unreachable. When using a Singleton, this means that only the nodes that can reach the singleton will access it. The others will not decide on a new Singleton in order to prevent a split-brain scenario. Another measure against split-brain is contacting the seed nodes in order. The first seed node is required to be up. The seed nodes are tried in order.
FSM There is a library for writing finite state machines called FSM. For larger actors it can be useful to use the FSM. Otherwise stick to pure become and unbecome. FSM also has an interval timer for scheduling messages. However, the use of stay() resets the interval timer, so you could have issues with never executing what is at the end of the timer. Routers There are two different kinds of routers: Pools and Groups. Pools are in charge of their own children and they are created and killed by the pool. Groups are configured with an ActorSelection that defines the actors to which the group should send its messages. There are several implementations: Consistent Hash, Random, Round Robin, BroadCast, Scatter - Gather First, and Smallest Mailbox. The names are self-explanatory. Synchronisation of data with CRDTs Synchronising data between multiple nodes can be done by choosing your datatype so that, if the timestamps and events are generated in one place, no duplicate entries occur. Therefore merging a map from a different node into your own map is easily done by copying entries you don’t already have to your own data. This can be implemented by letting each member node broadcast which data-points they have. Each node can then detect which information is lacking and request the specific data from the node that claimed to have the data. At some future point in time all nodes will be in sync. This is called eventual consistency. Singleton If you have a singleton cluster manager proxy it only starts when the cluster is formed. A cluster is formed if a member connects. The proxy will then pass on the buffered messages. Cluster Sharding Sharding is a way to split up a group of actors in a cluster. This can be useful if the group is too large to fit in the memory of a single machine. The Cluster Sharding feature takes care of the partitioning of the actors using a hash you have to define with a function shardResolver. The sharded actors can be messaged with a unique identifier using ClusterSharding(system).shardRegion("Counter") which proxies the message to the correct actor. ClusterSharding.start is what the Manager is to Singletons. It is recommended to put the sharding functions into a singleton object for easy re-use of your shards, containing the functions to start the sharding extension and proxy to the shard etc. It is also convenient to add tell and initialise helper functions to respectively send a message and initialise the actor by its unique id. Akka Persistence Akka persistence uses a Journal to store which messages were processed. One of the supported storage mechanisms is Cassandra. It is also possible to use a file-based journal which, of course, is not recommended. In the current version of Akka there are two approaches to persistence: command sourcing and event sourcing. Simply put, in command sourcing each message is first persisted and then offered to the actor to do as it pleases whereas in event sourcing only the results of actions are persisted. The latter is preferred and will be the only remaining method in following versions. Both methods support storing a snapshot of the current state and recovering from it. Command Sourcing The main problem with command sourcing lies in that all messages are replayed. This includes requests for information from dead actors which wastes resources for nothing. Moreover, in case of errors, the last message that killed the actor is also replayed and will probably kill the actor again in the process.
Event Sourcing With event sourcing one only stores state changing events. Events are received by the receiveRecover method. External side-effects should be performed in the receive method. The code for the internal side-effect of the event should be the same in both the receive and receiveRecover methods. The actor or trait for this will be named PersistentActor. Actor offloading One can use Akka Persistence to “pause” long living actors, e.g. actors that have seen no activity lately. This frees up memory. When the actor is needed again it can be safely restored from the persistence layer. Tidbits Akka 3 is to be released “not super soon”. It will contain typed actors. The consequence of this is that the sender field will be removed from the actor. Therefore, for request-response, the ActorRef should be added to the request itself. Concluding The Advanced Akka course gives a lot of insights and concrete examples of how to use the advanced Akka features of clustering, sharding and persisting data across multiple nodes in order to create a system that really is highly available, resilient and scalable. It also touches on the bleeding edge functionalities, the ideas and concepts around it and what to expect next in this growing ecosystem.

Examining Hackage: extensible-effects

Planet Haskell - 18 hours 15 min ago
Posted on July 15, 2014 I had a few people tell me after my last post that they would enjoy a write up on reading extensible-effects so here goes. I’m going to document my process of reading through and understanding how extensible-effects is implemented. Since this is a fairly large library (about 1k lines of code), we’re not going over all of it. Rather we’re just reviewing the core modules and enough of the extra ones to get a sense for how everything is implemented. If you’re curious or still have questions, the modules that we don’t cover should serve as a nice place for further exploration. Which Modules extensible-effects comes with quite a few modules, my find query reveals $ find src -name "*.hs" src/Data/OpenUnion1.hs src/Control/Eff/Reader/Strict.hs src/Control/Eff/Reader/Lazy.hs src/Control/Eff/Fresh.hs src/Control/Eff/Cut.hs src/Control/Eff/Exception.hs src/Control/Eff/State/Strict.hs src/Control/Eff/State/Lazy.hs src/Control/Eff/Writer/Strict.hs src/Control/Eff/Writer/Lazy.hs src/Control/Eff/Coroutine.hs src/Control/Eff/Trace.hs src/Control/Eff/Choose.hs src/Control/Eff/Lift.hs src/Control/Eff.hs src/Control/Eff/Reader/Strict.hs Whew! Well I’m going to take a leap and assume that extensible-effects is similar to the mtl in the sense that there are a few core modules, and then a bunch of “utility” modules. So there’s Control.Monad.Trans and then Control.Monad.State and a bunch of other implementations of MonadTrans. If we assume extensible-effects is formatted like this, then we need to look at Data.OpenUnion1 Control.Eff And maybe a few other modules to get a feel for how to use these two. I’ve added Data.OpenUnion1 because it’s imported by Control.Eff so is presumably important. Since Data.OpenUnion1 is at the top of our dependency DAG, we’ll start with it. Data.OpenUnion1 So we’re starting with Data.OpenUnion1. If the authors of this code have stuck to normal Haskell naming conventions, that’s an open union of type constructors, stuff with the kind * -> *. Happily, this module has an export list so we can at least see what’s public. module Data.OpenUnion1( Union (..) , SetMember , Member , (:>) , inj , prj , prjForce , decomp , unsafeReUnion ) where So we’re looking at a data type Union, which we export everything for. Two type classes SetMember and Member, a type operator :>, and a handful of functions, most likely to work with Union. So let’s figure out exactly what this union thing is data Union r v = forall t. (Functor t, Typeable1 t) => Union (t v) So Union r v is just a wrapper around some functor applied to v. This seems a little odd, what’s this r thing? The docs hint that Member t r should always hold. Member is a type class of two parameters with no members. In fact, grepping the entire source reveals that the entire definition and instances for Member in this code base is infixr 1 :> data ((a :: * -> *) :> b) class Member t r instance Member t (t :> r) instance Member t r => Member t (t' :> r) So this makes it a bit clearer, :> acts like a type level cons and Member just checks for membership! Now Union makes a bit more sense, especially in light of the inj function inj :: (Functor t, Typeable1 t, Member t r) => t v -> Union r v inj = Union So Union takes some t in r and hides it away in an existential applied to v. Now this is kinda like having a great nested bunch of Eithers with every t applied to v. Dual to inj, we can define a projection from a Union to some t in r.
Dual to inj, we can define a projection from a Union to some t in r. This will need to return something wrapped in Maybe since we don't know which member of r our Union is wrapping.

    prj :: (Typeable1 t, Member t r) => Union r v -> Maybe (t v)
    prj (Union v) = runId <$> gcast1 (Id v)

prj does some evil Typeable casts, but this is necessary since we're throwing away all our type information with that existential. That Id/runId pair is needed since gcast1 has the type

    -- In our case, `c ~ Id`
    gcast1 :: (Typeable t', Typeable t) => c (t a) -> Maybe (c (t' a))

They're just defined as

    newtype Id a = Id { runId :: a } deriving Typeable

so just like Control.Monad.Identity. Now let's try to figure out what this SetMember thing is.

    class Member t r => SetMember set (t :: * -> *) r | r set -> t
    instance SetMember set t r => SetMember set t (t' :> r)

This is unhelpful: all we have is the recursive step with no base case! Resorting to grep reveals that our base case is defined in Control.Eff.Lift, so we'll temporarily put this class off until then.

Now the rest of the file defines a few functions to operate over Unions. First up is an unsafe "forced" version of prj.

    infixl 4 <?>

    (<?>) :: Maybe a -> a -> a
    Just a <?> _ = a
    _      <?> a = a

    prjForce :: (Typeable1 t, Member t r) => Union r v -> (t v -> a) -> a
    prjForce u f = f <$> prj u <?> error "prjForce with an invalid type"

prjForce is exactly what it says on the label: a version of prj that throws an exception if we're in the wrong state of Union. Next is a way of unsafely rejiggering the type-level list that Union is indexed over.

    unsafeReUnion :: Union r w -> Union t w
    unsafeReUnion (Union v) = Union v

We need this for our last function, decomp. This function partially unfolds our Union into an Either

    decomp :: Typeable1 t => Union (t :> r) v -> Either (Union r v) (t v)
    decomp u = Right <$> prj u <?> Left (unsafeReUnion u)

This provides a way to actually do some sort of induction on r by breaking out each type piece by piece, with some absurd case for when we don't have a :> b. That about wraps up this little Union library; let's move on to see how it's actually used.

Control.Eff

Now let's talk about the core of extensible-effects, Control.Eff. As always we'll start by taking a look at the export list

    module Control.Eff( Eff (..)
                      , VE (..)
                      , Member
                      , SetMember
                      , Union
                      , (:>)
                      , inj
                      , prj
                      , prjForce
                      , decomp
                      , send
                      , admin
                      , run
                      , interpose
                      , handleRelay
                      , unsafeReUnion
                      ) where

So right away we can see that we're re-exporting stuff from Data.OpenUnion1 as well as several new things, including the infamous Eff. The first definition we come across in this module is VE. VE is either a simple value or a Union applied to a VE!

    data VE r w = Val w | E !(Union r (VE r w))

Right away we notice the "pure value or X" pattern we see with free monads and other abstractions over effects. We also get a quick function to try to extract a pure value from Val.

    fromVal :: VE r w -> w
    fromVal (Val w) = w
    fromVal _ = error "extensible-effects: fromVal was called on a non-terminal effect."

Now we've finally reached the definition of Eff!

    newtype Eff r a = Eff { runEff :: forall w. (a -> VE r w) -> VE r w }

So Eff bears a striking resemblance to Cont. There are two critical differences though: first, we specialize our return type to something constructed with VE r; second, by universally quantifying over w we sacrifice a lot of the power of Cont, including callCC!
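For comparison, here is the usual shape of Cont (just a reminder of the standard definition, modulo the ContT/Identity packaging in transformers, not code from extensible-effects), with Eff repeated below it as a comment:

    -- Plain Cont: the answer type r is an ordinary type variable, which is what
    -- makes tricks like callCC possible.
    newtype Cont r a = Cont { runCont :: (a -> r) -> r }

    -- Eff pins the answer type to VE r w and quantifies over w, so about the only
    -- thing a computation can do with its continuation is call it (possibly
    -- wrapping the result in effects via E).
    -- newtype Eff r a = Eff { runEff :: forall w. (a -> VE r w) -> VE r w }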
Next in Control.Eff are the instances for Eff

    instance Functor (Eff r) where
        fmap f m = Eff $ \k -> runEff m (k . f)
        {-# INLINE fmap #-}

    instance Applicative (Eff r) where
        pure = return
        (<*>) = ap

    instance Monad (Eff r) where
        return x = Eff $ \k -> k x
        {-# INLINE return #-}

        m >>= f = Eff $ \k -> runEff m (\v -> runEff (f v) k)
        {-# INLINE (>>=) #-}

Notice that these are all really identical to Cont's instances. Functor adds a function to the head of the continuation. Monad dereferences m and feeds the result into f. Exactly as with Cont.

Next we can look at our primitive function for handling effects

    send :: (forall w. (a -> VE r w) -> Union r (VE r w)) -> Eff r a
    send f = Eff (E . f)

I must admit, this tripped me up for a while. Here's how I read it: "provide a function which, when given a continuation for the rest of the program expecting an a, produces a side-effecting VE r w, and we'll map that into Eff". Remember how Union holds functors? Well, each of our effects must act as a functor and wrap itself in that union. By being open, we get the "extensible" in extensible-effects.

Next we look at how to remove effects once they've been added to our set of effects. In mtl-land, this is similar to the collection of runFooT functions that are used to gradually strip a layer of transformers away. The first step towards this is to transform the CPS-ed effectful computation Eff into a more manageable form, VE

    admin :: Eff r w -> VE r w
    admin (Eff m) = m Val

This is a setup step so that we can traverse the "tree" of effects that our Eff monad built up for us. Next, we know that we can take an Eff with no effects and unwrap it into a pure value. This is the "base case" for running an effectful computation.

    run :: Eff () w -> w
    run = fromVal . admin

Concerned readers may notice that we're using a partial function; this is OK because the E case is "morally impossible": there is no t such that Member t () holds.

Next is the function to remove just one effect from an Eff

    handleRelay :: Typeable1 t
                => Union (t :> r) v -- ^ Request
                -> (v -> Eff r a)   -- ^ Relay the request
                -> (t v -> Eff r a) -- ^ Handle the request of type t
                -> Eff r a
    handleRelay u loop h = either passOn h $ decomp u
      where passOn u' = send (<$> u') >>= loop

Next to send, this function gave me the most trouble. The trick was to realize that decomp will leave us in one of two cases: either some other effect producing a v (a Union r v), or a t producing a v (a t v). If we have a t v, then we're all set since we know exactly how to map that to an Eff r a with h. Otherwise we need to take this effect and add it back into our computation. send (<$> u') takes the rest of the computation, that continuation, and feeds it the v that we know our effects produce. This gives us the type Eff r v, where that outer Eff r contains our most recent effect as well as everything else. Now, to convert this to an Eff r a we need to transform that v into an a. The only way to do that is to use the supplied loop function, so we just bind to that.

Last but not least is a function to modify an effect somewhere in our effectful computation; we'll see this later with things like local from Control.Eff.Reader. To do this we want something like handleRelay but without removing t from r. We also need to generalize the type so that t can be anywhere in our set of effects. Otherwise we'd have to prematurely solidify our stack of effects to use something like modify.
    interpose :: (Typeable1 t, Functor t, Member t r)
              => Union r v
              -> (v -> Eff r a)
              -> (t v -> Eff r a)
              -> Eff r a
    interpose u loop h = maybe (send (<$> u) >>= loop) h $ prj u

Now this is almost identical to handleRelay, except that instead of using decomp, which splits off t and only works when r ~ t :> r', we use prj! This gives us a Maybe, and since the type of u doesn't need to change we just recycle it for the send (<$> u) >>= loop sequence.

That wraps up the core of extensible-effects, and I must admit that when writing this I was still quite confused as to how to actually use Eff to implement new effects. Reading a few examples really helped clear things up for me.

Control.Eff.State

The State monad has always been the sort of classic monad example, so I suppose we'll start here.

    module Control.Eff.State.Lazy( State (..)
                                 , get
                                 , put
                                 , modify
                                 , runState
                                 , evalState
                                 , execState
                                 ) where

So we're not reusing the State from Control.Monad.State but providing our own. It looks like

    data State s w = State (s -> s) (s -> w)

So what is this supposed to do? Well, that s -> w looks like a continuation of sorts: it takes the state s and produces the resulting value. The s -> s looks like something that modify should use. Indeed this is the case

    modify :: (Typeable s, Member (State s) r) => (s -> s) -> Eff r ()
    modify f = send $ \k -> inj $ State f $ \_ -> k ()

    put :: (Typeable e, Member (State e) r) => e -> Eff r ()
    put = modify . const

We grab the continuation from send and add a State effect on top which uses our modification function f. The continuation that State takes ignores the value it's passed, the current state, and instead feeds the program the () it's expecting. get is defined in a similar manner, but instead of modifying the state, we use State's continuation to feed the program the current state.

    get :: (Typeable e, Member (State e) r) => Eff r e
    get = send (inj . State id)

So we grab the continuation, feed it to a State id which won't modify the state, and then inject that into our open union of effects. Now that we have the API for working with states, let's look at how to remove that effect.

    runState :: Typeable s
             => s                    -- ^ Initial state
             -> Eff (State s :> r) w -- ^ Effect incorporating State
             -> Eff r (s, w)         -- ^ Effect containing final state and a return value
    runState s0 = loop s0 . admin
      where loop s (Val x) = return (s, x)
            loop s (E u)   = handleRelay u (loop s) $ \(State t k) ->
                               let s' = t s in loop s' (k s')

runState first preps our effect to be pattern matched on with admin. We then start loop with the initial state. loop has two cases: if we have run into a value, then we don't interpret any effects, we just stick the state and value together and return them. If we do have an effect, we use handleRelay to split the State s out of our effects. To handle the relayed case, where we get back a VE w, we just loop with the current state. However, if we get a State t k, we update the state with t and pass the new state to the continuation k.

From runState we get evalState and execState.

    evalState :: Typeable s => s -> Eff (State s :> r) w -> Eff r w
    evalState s = fmap snd . runState s

    execState :: Typeable s => s -> Eff (State s :> r) w -> Eff r s
    execState s = fmap fst . runState s

That wraps up the interface for Control.Eff.State. The nice bit is that this makes it a lot clearer how to use send, handleRelay and a few other functions from the core. A small usage sketch follows below.
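Here is a tiny usage sketch of my own (countDown and example are invented names, and I'm assuming the signatures quoted above) showing how these pieces fit together:

    {-# LANGUAGE FlexibleContexts #-}
    import Control.Eff
    import Control.Eff.State.Lazy
    import Control.Monad (when)

    -- Decrement the state until it hits zero; each get/put is interpreted by
    -- the handleRelay loop inside runState.
    countDown :: Member (State Int) r => Eff r ()
    countDown = do
      n <- get
      when (n > (0 :: Int)) $ put (n - 1) >> countDown

    -- runState peels off the State effect, run unwraps the now effect-free Eff.
    example :: (Int, ())
    example = run (runState (10 :: Int) countDown)   -- (0, ())

Note that the pair comes back final-state-first, which is why evalState is the fmap snd of runState.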
Control.Eff.Reader

Now we're on to Reader. The interesting thing here is that local highlights how to use interpose properly. As always, we start by looking at what exactly this module provides

    module Control.Eff.Reader.Lazy( Reader (..)
                                  , ask
                                  , local
                                  , reader
                                  , runReader
                                  ) where

The definition of Reader is refreshingly simple

    newtype Reader e v = Reader (e -> v)

Keen readers will note that this is just half of the State definition, which makes sense; Reader is half of State. ask is defined almost identically to get

    ask :: (Typeable e, Member (Reader e) r) => Eff r e
    ask = send (inj . Reader)

We just feed the continuation for the program into Reader. A simple wrapper over this gives our equivalent of asks

    reader :: (Typeable e, Member (Reader e) r) => (e -> a) -> Eff r a
    reader f = f <$> ask

Next up is local, which is the most interesting bit of this module.

    local :: (Typeable e, Member (Reader e) r)
          => (e -> e)
          -> Eff r a
          -> Eff r a
    local f m = do
        e <- f <$> ask
        let loop (Val x) = return x
            loop (E u)   = interpose u loop (\(Reader k) -> loop (k e))
        loop (admin m)

So local starts by grabbing the view of the environment we're interested in, e. From there we define our worker function, which looks a lot like runState. The key difference is that instead of using handleRelay we use interpose to replace each Reader effect with the appropriate environment. Remember that interpose is not going to remove Reader from the set of effects, just update each Reader effect in the current computation. Finally, we simply rejigger the computation with admin and feed it to loop. In fact, this is very similar to how runReader works!

    runReader :: Typeable e => Eff (Reader e :> r) w -> e -> Eff r w
    runReader m e = loop (admin m)
      where loop (Val x) = return x
            loop (E u)   = handleRelay u loop (\(Reader k) -> loop (k e))

Control.Eff.Lift

Now, between Control.Eff.Reader and Control.Eff.State I felt I had a pretty good handle on most of what I'd read in extensible-effects. There was just one remaining loose end: SetMember. Don't remember what that was? It was a class in Data.OpenUnion1 that was conspicuously absent of detail or use. I finally found where it seemed to be used: in Control.Eff.Lift. First let's poke at the exports of this module

    module Control.Eff.Lift( Lift (..)
                           , lift
                           , runLift
                           ) where

This module is designed to lift an arbitrary monad into the world of effects. There's a caveat though: since monads aren't necessarily commutative, the order in which we run them is very important. Imagine, for example, the difference between IO (m a) and m (IO a). So to ensure that Eff can support lifted monads we have to do some evil things. First, we must require that we never have two lifted monads and that we always run the lifted monad last. This is a little icky, but its usefulness outweighs such ugliness. To ensure condition 1, we need SetMember.

    instance SetMember Lift (Lift m) (Lift m :> ())

So we define a new instance of SetMember. Basically this says that a Lift m is a SetMember Lift (Lift m) r iff Lift m is the last item in r. To ensure condition number two, we define runLift with the more restrictive type

    runLift :: (Monad m, Typeable1 m) => Eff (Lift m :> ()) w -> m w

We can now look into exactly how Lift is defined.

    data Lift m v = forall a. Lift (m a) (a -> v)

So this Lift acts sort of like a "suspended bind". We postpone actually binding the monad and simulate doing so with a continuation a -> v. We can define our one operation with Lift, lift.

    lift :: (Typeable1 m, SetMember Lift (Lift m) r) => m a -> Eff r a
    lift m = send (inj . Lift m)

This works by suspending the rest of the program in our faux binding, to be unwrapped later in runLift.
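From the user's side, lift and runLift look roughly like this; a minimal sketch of my own (hello is an invented name), assuming the signatures above:

    {-# LANGUAGE FlexibleContexts #-}
    import Control.Eff
    import Control.Eff.Lift

    -- A computation that performs IO through the Lift effect.
    hello :: SetMember Lift (Lift IO) r => Eff r ()
    hello = lift (putStrLn "hello from a lifted IO action")

    -- runLift discharges the one-and-only Lift effect, leaving us back in IO.
    main :: IO ()
    main = runLift hello

Because SetMember forces Lift m to be the last effect in the list, runLift is always the outermost run function you apply.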
Finally, here is runLift itself.

    runLift :: (Monad m, Typeable1 m) => Eff (Lift m :> ()) w -> m w
    runLift m = loop (admin m)
      where loop (Val x) = return x
            loop (E u)   = prjForce u $ \(Lift m' k) -> m' >>= loop . k

The one interesting difference between this function and the rest of the run functions we've seen is that here we use prjForce. The reason is that we know r is just Lift m :> (). This drastically simplifies the process and means all we're essentially doing is transforming each Lift into >>=. That wraps up our tour of the module and, with it, extensible-effects.

Wrap Up

This post turned out a lot longer than I'd expected, but I think it was worth it. We've gone through the coroutine/continuation-based core of extensible-effects and walked through a few different examples of how to actually use it. If you're still having some trouble putting the pieces together, the rest of extensible-effects is a great collection of useful examples of building effects. I hope you had as much fun as I did with this one! Thanks to Erik Rantapaa for reading a much longer post than I led him to believe it would be.