Functional Programming

Fucking Microsoft, fucking Windows

Planet Haskell - Sun, 11/23/2014 - 07:00
I have a new Lenovo laptop, which dual-boots Windows and Linux. (Huge thanks to the author of [these detailed instructions](https://askubuntu.com/questions/221835/installing-ubuntu-on-a-pre-installed-windows-8-64-bit-system-uefi-supported/228069#228069) and [this tool](http://sourceforge.net/projects/boot-repair/) that magically fixes whatever is messed up.) So I'm using Windows now for the first time in several years, and remembering why I stopped. I had forgotten, for a long time, what this was like. You hear a lot of propaganda about how a Linux user has to solve a lot of problems that the free development process didn't solve, how they have to understand system administration, and how when things go wrong they have to depend on free support, the implication being that the free support is sketchy and incomplete. In the intervening years I had forgotten what a load of bullshit this is. When my Linux system was broken, or when the free support was sketchy and incomplete, I would wonder if I wouldn't be better off using something else. A few months of dealing with Windows have reminded me that no, I would not be better off.

When I first got the laptop, I had Windows booted, and I tried plugging in a VGA monitor for the first time. This is a universal, core function of the laptop, so one might expect them to have gotten it right a long time ago. Or maybe Lenovo tried it thirty minutes after the first laptop came from the factory, discovered something wrong, and then worked with Microsoft to make sure the problem was fixed. Anyway, one would expect to plug in the monitor and have it just work; whereas under Linux you might expect to have to fiddle with `Xorg.conf`, or download `xrandr` and puzzle out its man page, or something like that. But no, it was just the opposite. When I plugged in the monitor under Linux, everything just worked instantly. When I plugged in the monitor under Windows, Windows recognized the external monitor, but the integrated monitor went black.
I tried all the monitor and display settings; the integrated display was still black. So I called Lenovo support. What a mistake that was. The guy on the other end had me try the same set of display settings I had just tried, tried disabling and reinstalling some of the device drivers, and then threw up his hands and said I had a bad motherboard and I should return the laptop for replacement. I said I was sure he was mistaken. I knew the motherboard was fine because the VGA monitor worked fine under Linux. (I didn't say this on the phone, because sometimes the word “Linux” is like waving a red cape in front of a bull: oh, you use Linux, that must be the cause of your problem.) Then at the end of the call: “Are you satisfied with the results of your call today?” Uh, what? You didn't solve my problem and your suggested solution is obvious nonsense, so, um, no, I'm not satisfied with the results of my call today. Anyway, that's the last time I'll make that mistake. I did some web searching and found a page that suggested some incantation in the boot settings that fixed the problem. So much for paid support; it was worse than useless.

More recently, I took the laptop to Los Angeles, and when I arrived, the AC power adapter wasn't working; I would plug in the laptop and it wouldn't charge. Both Linux and Windows reported "not plugged in". (Lenovo got rid of the LED that announces when AC power is connected, so you have to boot the machine to find out whether it thinks it is plugged in.) This had worked a few hours before, when I had it plugged in at the airport. I tested with a spare line cord and guessed that the power brick was faulty. Then I limped along for two days on reserve battery and a borrowed power brick until a replacement arrived. The replacement worked for a few hours in the office, but when I got back to the hotel, it failed in the same way.
Over the next couple of hours I discovered that if I powered the machine down, pulled out and replaced the battery, and then booted it up, the AC power would work indefinitely under Linux, and would even work under Windows _until I logged in_, when it would stop working after a few seconds. Once it stopped working, the only way to get it to work again was to power the machine down and pull the battery. So the problem was evidently a Windows software problem. And eventually I made the problem go away by booting with AC power and no battery, uninstalling the Windows battery device driver (yes, the battery has a device driver), shutting down, and booting with battery but no AC power. It has continued to work since. The original power brick, the one I thought had failed, is fine. I wouldn't have imagined that a Windows device driver problem could cause a failure of AC charging, but it seems so. Had this kind of nonsense happened under Linux, I would have been annoyed but taken it in stride; Linux is written and maintained by volunteers, so you take what you can get. Microsoft has no such excuse. We paid Microsoft a hundred billion dollars for this shoddy shitware.

Within this instrument, resides the Universe

Planet Haskell - Sun, 11/23/2014 - 07:00
When opportunity permits, I have been trying to teach my ten-year-old daughter rudiments of algebra and group theory. Last night I posed this problem: Mary and Sue are sisters. Today, Mary is three times as old as Sue; in two years, she will be twice as old as Sue. How old are they now?

I have tried to teach Ms. 10 that these problems have several phases. In the first phase you translate the problem into algebra, and then in the second phase you manipulate the symbols, almost mechanically, until the answer pops out as if by magic. There is a third phase, which is pedagogically and practically essential. This is to check that the solution is correct by translating the results back to the context of the original problem. It's surprising how often teachers neglect this step; it is as if a magician who had made a rabbit vanish from behind a screen then forgot to take away the screen to show the audience that the rabbit had vanished.

Ms. 10 set up the equations, not as I would have done, but using four unknowns, to represent the two ages today and the two ages in the future: $$\begin{align} MT & = 3ST \\ MY & = 2SY \end{align}$$ ($MT$ here is the name of a single variable, not a product of $M$ and $T$; the others should be understood similarly.) “Good so far,” I said, “but you have four unknowns and only two equations. You need to find two more relationships between the unknowns.” She thought a bit and then wrote down the other two relations: $$\begin{align} MY & = MT + 2 \\ SY & = ST + 2 \end{align}$$ I would have written two equations in two unknowns: $$\begin{align} M_T & = 3S_T \\ M_T + 2 & = 2(S_T + 2) \end{align}$$ but one of the best things about mathematics is that there are many ways to solve each problem, and no method is privileged above any other, except perhaps for reasons of practicality. Ms. 10's translation is different from what I would have done, and it requires more work in phase 2, but it is correct, and I am not going to tell her to do it my way.
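Her four-equation system reduces mechanically. One way to work it through (my own arithmetic; the post only describes the reduction in outline):

$$\begin{align} MY & = MT + 2 = 3ST + 2 \\ MY & = 2SY = 2(ST + 2) = 2ST + 4 \\ 3ST + 2 & = 2ST + 4 \end{align}$$

so $ST = 2$, and substituting back gives $MT = 6$, $SY = 4$, and $MY = 8$.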
The method works both ways; this is one of its best features. If the problem can be solved by thinking of it as a problem in two unknowns, then it can also be solved by thinking of it as a problem in four or in eleven unknowns. You need to find more relationships, but they must exist and they can be found. Ms. 10 may eventually want to learn a technically easier way to do it, but to teach that right now would be what programmers call a premature optimization. If her formulation of the problem requires more symbol manipulation than what I would have done, that is all right; she needs practice manipulating the symbols anyway. She went ahead with the manipulations, reducing the system of four equations to three, then two and then one, solving the one equation to find the value of the single remaining unknown, and then substituting that value back to find the other unknowns. One nice thing about these simple problems is that when the solution is correct you can see it at a glance: Mary is six years old and Sue is two, and in two years they will be eight and four. Ms. 10 loves picking values for the unknowns ahead of time, writing down a random set of relations among those values, and then working the method and seeing the correct answer pop out. I remember being endlessly delighted by almost the same thing when I was a little older than her. In The Dying Earth Jack Vance writes of a wizard who travels to an alternate universe to learn from the master “the secret of renewed youth, many spells of the ancients, and a strange abstract lore that Pandelume termed ‘Mathematics.’” “I find herein a wonderful beauty,” he told Pandelume. “This is no science, this is art, where equations fall away to elements like resolving chords, and where always prevails a symmetry either explicit or multiplex, but always of a crystalline serenity.” After Ms. 
10 had solved this problem, I asked if she was game for something a little weird, and she said she was, so I asked her: Mary and Sue are sisters. Today, Mary is three times as old as Sue; in two years, they will be the same age. How old are they now? “WHAAAAAT?” she said. She has a good number sense, and immediately saw that this was a strange set of conditions. (If they aren't the same age now, how can they be the same age in two years?) She asked me what would happen. I said (truthfully) that I wasn't sure, and suggested she work through it to find out. So she set up the equations as before and worked out the solution, which is obvious once you see it: Both girls are zero years old today, and zero is three times as old as zero. Ms. 10 was thrilled and delighted, and shared her discovery with her mother and her aunt. There are some powerful lessons here. One is that the method works even when the conditions seem to make no sense; often the results pop out just the same, and can sometimes make sense of problems that seem ill-posed or impossible. Once you have set up the equations, you can just push the symbols around and the answer will emerge, like a familiar building as you approach it through a fog. But another lesson, only hinted at so far, is that mathematics has its own way of understanding things, and this is not always the way that humans understand them. Goethe famously said that whatever you say to mathematicians, they immediately translate it into their own language and then it is something different; I think this is exactly what he meant. In this case it is not too much of a stretch to agree that Mary is three times as old as Sue when they are both zero years old. But in the future I plan to give Ms. 
10 a problem that requires Mary and Sue to have negative ages—say that Mary is twice as old as Sue today, but in three years Sue will be twice as old—to demonstrate that the answer that pops out may not be a reasonable one, or that the original translation into mathematics can lose essential features of the original problem. The solution that says that Mary is −2 and Sue is −1 is mathematically irreproachable, and if the original problem had been posed as “Find two numbers such that…” it would be perfectly correct. But translated back to the original context of a problem that asks about the ages of two sisters, the solution is unacceptable. This is the point of the joke about the spherical cow.
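That future problem works out the same way (my own arithmetic, for reference): with $M = 2S$ today, and Sue twice as old as Mary in three years,

$$\begin{align} S + 3 & = 2(M + 3) = 4S + 6 \\ -3 & = 3S \end{align}$$

so $S = -1$ and $M = -2$: irreproachable as arithmetic, absurd as ages.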

GHC Weekly News - 2014/11/21

Planet Haskell - Sun, 11/23/2014 - 07:00
Hi *,

To get things back on track, we have a short post following up the earlier one this week. It's been busy today so I'll keep it short: the STABLE freeze Austin announced two weeks ago is happening now, although at this point a few things we wanted to ship are just 98% ready, so it may wait until Monday. HEAD has been very busy the past two days as many things are now trying to merge as close to the window as possible. Some notes follow:

- Gergo Erdi merged the implementation of pattern synonym type signatures: https://www.haskell.org/pipermail/ghc-devs/2014-November/007369.html
- HEAD now has support for using the `deriving` clause for arbitrary classes (see #5462).
- HEAD now has a new flag `-fwarn-missing-exported-sigs`, which fixes #2526. See https://phabricator.haskell.org/D482
- HEAD now has 64-bit iOS and SMP support for ARM64, thanks to Luke Iannini. See #7942.
- HEAD no longer ships haskell98, haskell2010, old-locale or old-time, per our decision to drop support for haskell98 and haskell2010. GHC 7.10 compatible releases of old-locale and old-time have been released on Hackage. See https://www.haskell.org/pipermail/ghc-devs/2014-November/007357.html and https://www.haskell.org/pipermail/ghc-devs/2014-November/007383.html
- base now exports a new module for natural numbers called `Numeric.Natural`, following Herbert Valerio Riedel's recent proposal.
- HEAD should finally be compatible with LLVM 3.5, AKA #9142. The patch from Ben Gamari is at https://phabricator.haskell.org/D155

Your author has been busy and delayed due to some bad travel experiences the past week, so the 7.8.4 RC1 hasn't landed this past week. Hopefully it will be out by the end of this week still. Since the last update was only a few days ago, you'd think we haven't closed a lot of tickets, but we have!
Thomas Miedema has been very very persistent about closing tickets and cleaning them up, which is greatly appreciated: #9810, #8324, #8310, #9396, #9626, #9776, #9807, #9698, #7942, #9703, #8584, #8968, #8174, #9812, #9209, #9220, #9151, #9201, #9318, #9109, #9126, #8406, #8102, #8093, #8085, #8068, #8094, #9590, #9368, #2526, #9569, #8149, #9815, #5462, #9647, #8568, #9293, #7484, #1476, #9824, #9628, #7942 2014-11-22T00:33:48Z thoughtpolice
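One of the items above, `deriving` for arbitrary classes (#5462, exposed as the `DeriveAnyClass` extension in GHC 7.10), can be sketched as follows; the `Describable` class here is a made-up example, not part of any library:

```haskell
{-# LANGUAGE DeriveAnyClass #-}
module Main where

-- A class whose methods all have defaults, so an empty instance suffices.
class Describable a where
  describe :: a -> String
  describe _ = "some value"

-- With DeriveAnyClass, `deriving Describable` generates that empty
-- instance, so every Color falls back to the default method.
data Color = Red | Green | Blue
  deriving (Show, Describable)

main :: IO ()
main = putStrLn (describe Red) -- prints "some value"
```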

Bidirectional Type Checkers for λ→ and λΠ

Planet Haskell - Sun, 11/23/2014 - 07:00
Posted on November 22, 2014

Tags: haskell, types, compilers

This week I learned that my clever trick for writing a type checker actually has a proper name: bidirectional type checking. In this post I'll explain what exactly that is and we'll use it to write a few fun type checkers. First of all, let's talk about one of the fundamental conflicts when designing a statically typed language: how much information need we demand from the user? Clearly we can go too far in either direction. Even people who are supposedly against type inference support at least some inference. I'm not aware of a language that requires you to write something like

```
my_function((my_var : int) + (1 : int) : int) : string
```

Clearly inferring the types of some expressions is necessary. On the other hand, if we leave out all type annotations then it becomes a lot harder for a human reader to figure out what's going on! I, at least, need to see signatures for top level functions or I become grumpy. So inside a type checker we always have two sorts of processes:

1. I know this must have the type T; I'll check to make sure this is the case.
2. I have no idea what the type of this expression is; I'll examine the expression to figure it out.

In a bidirectional type checker, we acknowledge these two phases by explicitly separating the type checker into two functions:

```haskell
inferType :: Expr -> Maybe Type
checkType :: Type -> Expr -> Maybe ()
```

Our type checker thus has two directions: one where we use the type to validate the expression (the type flows in), and one where we synthesize the type from the expression (the type flows out). That's all that this is! It turns out that a technique like this is surprisingly robust. It handles everything from subtyping to simple dependent types! To see how this actually plays out I think it'd be best to just dive in and do something with it.

Laying Out Our Language

Now when we're building a bidirectional type checker we really want our AST to explicitly indicate inferrable vs checkable types.
Clearly the parser might not care so much about this distinction, but prior to type checking it's helpful to create this polarized tree. For a simple language you can imagine

```haskell
data Ty = Bool
        | Arr Ty Ty
        deriving (Eq, Show)

data IExpr = Var Int
           | App IExpr CExpr
           | Annot CExpr Ty
           | If CExpr IExpr IExpr
           | ETrue
           | EFalse

data CExpr = Lam CExpr
           | CI IExpr
```

This is just simply typed lambda calculus with booleans. We're using DeBruijn indices so we need not specify a variable for Lam. The IExpr type is for expressions whose types we can infer, while CExpr is for expressions we can only check. Much of this is unsurprising: we can always infer the types of variables, inferring the types of lambdas is hard, and so on. Something worth noting is CI. For any inferrable expression, we can make it checkable by inferring a type and checking that it's equal to what we expected. This is actually how Haskell works: GHC just infers a type without bothering with your signature and then checks that you were right in the first place! Now that we've separated out our expressions, we can easily define our type checker.

```haskell
type Env = [Ty]

(?!) :: [a] -> Int -> Maybe a
xs ?! i = if i < length xs then Just (xs !! i) else Nothing

inferType :: Env -> IExpr -> Maybe Ty
inferType env (Var i) = env ?! i
inferType env (App l r) =
  case inferType env l of
    Just (Arr lTy rTy) -> checkType env r lTy >> return rTy
    _                  -> Nothing
inferType env (Annot e an) = checkType env e an >> return an
inferType _ ETrue  = return Bool
inferType _ EFalse = return Bool
inferType env (If i t e) = do
  checkType env i Bool
  lTy <- inferType env t
  rTy <- inferType env e
  guard (lTy == rTy)
  return lTy

checkType :: Env -> CExpr -> Ty -> Maybe ()
checkType env (Lam ce) (Arr l r) = checkType (l : env) ce r
checkType env (CI e) t = inferType env e >>= guard . (t ==)
checkType _ _ _ = Nothing
```

So our type checker doesn't have many surprises in it. The environment is easy to maintain since DeBruijn indices are easily stored in a list.
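As a quick smoke test (my own example, not from the original post), the STLC fragment assembles into a self-contained runnable file; an annotated identity function applied to ETrue infers Bool, while an ill-typed application fails:

```haskell
import Control.Monad (guard)

-- The STLC checker from the post, gathered into one file for testing.
data Ty = Bool | Arr Ty Ty deriving (Eq, Show)

data IExpr = Var Int | App IExpr CExpr | Annot CExpr Ty
           | If CExpr IExpr IExpr | ETrue | EFalse

data CExpr = Lam CExpr | CI IExpr

type Env = [Ty]

(?!) :: [a] -> Int -> Maybe a
xs ?! i = if i < length xs then Just (xs !! i) else Nothing

inferType :: Env -> IExpr -> Maybe Ty
inferType env (Var i) = env ?! i
inferType env (App l r) =
  case inferType env l of
    Just (Arr lTy rTy) -> checkType env r lTy >> return rTy
    _                  -> Nothing
inferType env (Annot e an) = checkType env e an >> return an
inferType _ ETrue  = return Bool
inferType _ EFalse = return Bool
inferType env (If i t e) = do
  checkType env i Bool
  lTy <- inferType env t
  rTy <- inferType env e
  guard (lTy == rTy)
  return lTy

checkType :: Env -> CExpr -> Ty -> Maybe ()
checkType env (Lam ce) (Arr l r) = checkType (l : env) ce r
checkType env (CI e) t = inferType env e >>= guard . (t ==)
checkType _ _ _ = Nothing

main :: IO ()
main = do
  -- (\x -> x), annotated Bool -> Bool, applied to True: infers Bool.
  print (inferType [] (App idBool (CI ETrue)))          -- prints Just Bool
  -- Applying it to a function instead of a Bool fails to type check.
  print (inferType [] (App idBool (CI (Annot (Lam (CI (Var 0))) (Arr Bool Bool))))) -- prints Nothing
  where idBool = Annot (Lam (CI (Var 0))) (Arr Bool Bool)
```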
Now that we've seen how a bidirectional type checker more or less works, let's kick it up a notch.

Type Checking Dependent Types

Type checking a simple dependently typed language is actually not nearly as bad as you'd expect. The first thing to realize is that dependent types collapse terms and types into a single syntactic category. We still maintain the distinction between inferrable and checkable values, resulting in

```haskell
data IExpr = Var Int
           | App IExpr CExpr
           | Annot CExpr CExpr
           | ETrue
           | EFalse
           | Bool
           | Star
           -- New stuff starts here
           | Pi CExpr CExpr
           | Const String
           | Free Int
           deriving (Eq, Show, Ord)

data CExpr = Lam CExpr
           | CI IExpr
           deriving (Eq, Show, Ord)
```

So you can see we've added four new expressions, all inferrable. Star is just the kind of types, as it is in Haskell. Pi is the dependent function type; it's like Arr, except the return type can depend on the supplied value. For example, you can imagine a type like

```
replicate :: (n : Int) -> a -> List n a
```

which says something like “give me an integer n and a value, and I'll give you back a list of length n”. Interestingly, we've introduced constants. These are necessary simply because without them this language is unbelievably boring. Constants would be defined in the environment and they represent constant, irreducible terms. You should think of them almost like constructors in Haskell. For example, one can imagine three constants

```
Nat  :: Star
Zero :: Nat
Succ :: (_ : Nat) -> Nat
```

which serve to define the natural numbers. Last but not least, we've added “free variables” as an explicit construct.

Now an important piece of a type checker is comparing types for equality. In STLC, equivalent types are syntactically equal, so that was solved with deriving Eq. Here we need a bit more subtlety: now we need to check arbitrary expressions for equality! This is hard. We'll reduce things as much as possible and then just check syntactic equality. This means that if True then a else b would equal a as we'd hope, but \x -> if x then a else a wouldn't.
Now in order to facilitate this check we'll define a type for fully reduced expressions. Since we're only interested in checking equality on these terms, we can toss the inferrable vs checkable division out the window.

```haskell
data VConst = CAp VConst Val
            | CVar String
            | CFree Int

data Val = VStar
         | VBool
         | VTrue
         | VFalse
         | VConst VConst
         | VArr Val Val
         | VPi Val (Val -> Val)
         | VLam (Val -> Val)
         | VGen Int
```

Now since we have constants we can have chains of application that we can't reduce; that's what VConst is. Notice that this handles the case of just having a constant nicely. The value dichotomy uses a nice trick from the “Simple Easy!” paper: we use HOAS to have functions that reduce themselves when applied. The downside of this is that we need VGen to peek inside the now opaque VLam and VPi. The idea is that we'll generate a unique Int and apply the functions to VGen i. In order to conveniently generate these fresh integers I used monad-gen (it's not self promotion if it's useful :). Equality checking comes to

```haskell
-- *Whistle and fidget with hands*
instance Enum Val where
  toEnum     = VGen
  fromEnum _ = error "You're a bad person."

eqTerm :: Val -> Val -> Bool
eqTerm l r = runGen $ go l r
  where go VStar VStar   = return True
        go VBool VBool   = return True
        go VTrue VTrue   = return True
        go VFalse VFalse = return True
        go (VArr f a) (VArr f' a') = (&&) <$> go f f' <*> go a a'
        go (VLam f) (VLam g) = gen >>= \v -> go (f v) (g v)
        go (VPi t f) (VPi t' g) =
          -- VPi carries an argument type as well as a body function,
          -- so compare both
          (&&) <$> go t t' <*> (gen >>= \v -> go (f v) (g v))
        go (VGen i) (VGen j) = return (i == j)
        go (VConst c) (VConst c') = case (c, c') of
          (CVar v, CVar v')    -> return (v == v')
          (CFree i, CFree j)   -> return (i == j)
          (CAp f a, CAp f' a') ->
            (&&) <$> go (VConst f) (VConst f') <*> go a a'
          _ -> return False
        go _ _ = return False
```

Basically we just recurse and return true or false at the leaves. Now that we know how to check equality of values, we actually need to map terms into those values. This involves basically writing a little interpreter.
```haskell
inf :: [Val] -> IExpr -> Val
inf _ ETrue     = VTrue
inf _ EFalse    = VFalse
inf _ Bool      = VBool
inf _ Star      = VStar
inf _ (Free i)  = VConst (CFree i)
inf _ (Const s) = VConst (CVar s)
inf env (Annot e _) = cnf env e
inf env (Var i) = env !! i
inf env (Pi l r) = VPi (cnf env l) (\v -> cnf (v : env) r)
inf env (App l r) =
  case inf env l of
    VLam f   -> f (cnf env r)
    VConst c -> VConst . CAp c $ cnf env r
    _        -> error "Impossible: evaluated ill-typed expression"

cnf :: [Val] -> CExpr -> Val
cnf env (CI e)  = inf env e
cnf env (Lam c) = VLam $ \v -> cnf (v : env) c
```

The interesting cases are for Lam, Pi, and App. For App we actually do reductions wherever we can; otherwise we know that we've just got a constant, so we slap that on the front. Lam and Pi are basically the same: they wrap the evaluation of the body in a function and evaluate it based on whatever is fed in. This is critical; otherwise App's reductions get much more complicated.

We need one final thing. You may have noticed that all Vals are closed: there are no free DeBruijn variables. This means that when we go under a binder we can't type check open terms, since we're representing types as values and the term we're checking shares variables with its type. So when our type checker goes under a binder, it is going to substitute the now-free variable for a fresh Free i. Frankly, this kinda sucks. I poked about for a better solution, but this is what “Simple Easy!” does too. To do these substitutions we have

```haskell
ibind :: Int -> IExpr -> IExpr -> IExpr
ibind i e (Var j) | i == j = e
ibind i e (App l r)   = App (ibind i e l) (cbind i e r)
ibind i e (Annot l r) = Annot (cbind i e l) (cbind i e r)
ibind i e (Pi l r)    = Pi (cbind i e l) (cbind i e r)
ibind _ _ e           = e -- Non-recursive cases

cbind :: Int -> IExpr -> CExpr -> CExpr
cbind i e (Lam b) = Lam (cbind (i + 1) e b)
cbind i e (CI c)  = CI (ibind i e c)
```

This was a bit more work than I anticipated, but now we're ready to actually write the type checker!
Since we're doing bidirectional type checking, we're once again going to have two functions, inferType and checkType. Our environment is now a record:

```haskell
data Env = Env { locals    :: M.Map Int Val
               , constants :: M.Map String Val }
```

The inferring stage is mostly the same:

```haskell
inferType :: Env -> IExpr -> GenT Int Maybe Val
inferType _ (Var _) = lift Nothing -- The term is open
inferType (Env _ m) (Const s) = lift $ M.lookup s m
inferType (Env m _) (Free i)  = lift $ M.lookup i m
inferType _ ETrue  = return VBool
inferType _ EFalse = return VBool
inferType _ Bool   = return VStar
inferType _ Star   = return VStar
inferType env (Annot e ty) = do
  checkType env ty VStar
  let v = cnf [] ty
  checkType env e v >> return v
inferType env (App f a) = do
  ty <- inferType env f
  case ty of
    VPi aTy body -> do
      checkType env a aTy
      return (body $ cnf [] a)
    _ -> lift Nothing
inferType env (Pi ty body) = do
  checkType env ty VStar
  i <- gen
  let v    = cnf [] ty
      env' = env { locals = M.insert i v (locals env) }
  checkType env' (cbind 0 (Free i) body) VStar
  return VStar
```

The biggest difference is that now we have to compute some types on the fly. For example, in Annot we check that we are in fact annotating with a type, then we reduce it to a value. This order is critical! Remember that cnf requires well typed terms. Beyond this there are two interesting cases: App, which nicely illustrates what a pi type means, and Pi, which demonstrates how to deal with a binder. For App we start in the same way: we grab the (function) type of the function. We can then check that the argument has the right type. To produce the output type, however, we have to normalize the argument as far as we can and then feed it to body, which computes the return type. Remember that if there's some free variable in a then it'll just be represented as VConst (CFree ...). Pi checks that we're quantifying over a type first off. From there it generates a fresh free variable and updates the environment before recursing.
We use cbind to replace all occurrences of the now unbound variable with an explicit Free. checkType is pretty trivial after this: Lam is almost identical to Pi, and CI is just eqTerm.

```haskell
checkType :: Env -> CExpr -> Val -> GenT Int Maybe ()
checkType env (CI e) v = inferType env e >>= guard . eqTerm v
checkType env (Lam ce) (VPi argTy body) = do
  i <- gen
  let ce'  = cbind 0 (Free i) ce
      env' = env { locals = M.insert i argTy (locals env) }
  checkType env' ce' (body $ VConst (CFree i))
checkType _ _ _ = lift Nothing
```

And that's it!

Wrap Up

So let's circle back to where we started: bidirectional type checking! Hopefully we've seen how structuring a type checker around these two core functions yields something quite pleasant. What makes this really interesting, though, is how well it scales. You can use this style of type checker to handle subtyping, [dependent] pattern matching, heaps and tons of interesting features. At 400 lines, though, I think I'll stop here :)

2014-11-22T00:00:00Z

Bright Club

Planet Haskell - Sun, 11/23/2014 - 07:00
Tuesday 25 November at The Stand Comedy Club, I will be part of the line up at Bright Club, speaking on the subject of 'Turingery'. Bright Club is stand-up by academics---we are trained professionals; don't try this at home! Doors open 7:30pm, show starts 8:30pm. The Stand is at 5 York Place, Edinburgh, EH1 3EB. Tickets £5 at the door or online.

A Visual Introduction to DSP for SDR (video)

Planet Haskell - Sun, 11/23/2014 - 07:00
I have really failed to get around to blogging what I've been doing lately, which is all software-defined radio. Let's start fixing that, in reverse order. Yesterday, I went to a Bay Area SDR meetup, “Cyberspectrum”, organized by Balint Seeber, and gave a presentation on visual representations of digital signals and DSP operations. It was very well received. This video is a recording of the entire event, with my talk starting at 39:35. 2014-11-20T18:41:06Z Kevin Reid (kpreid) kpreid@switchb.org

Outages and improvements...

Planet Haskell - Sun, 11/23/2014 - 07:00
This past week, the primary Haskell.org server hosting the wiki and mailing systems went down due to RAID failures in its underlying hardware. Thanks to some (real) luck and hard work, we've managed to recover and move off the server to a new system, avoided some faulty hardware, and hopefully gained some improved reliability.

The rundown on what happened

Before we started using Rackspace, everything existed on a single machine hosted by Hetzner, in Germany. This machine was named rock and ran several VMs with IPs allocated to them, which ran Hackage, MediaWiki (AKA www) and ghc.haskell.org on top of KVM. This server had a RAID1 setup between two drives. These drives were partitioned to have a primary 1.7TB partition, mirrored, for most of the data, and another small partition, also mirrored. We had degradation on both partitions and essentially lost one drive completely. This is why the system had been so slow the past week and grinding to a halt so often.

A quick move

We began to investigate this when the drives became almost completely unable to service IO. We found something really, really bad: one of the drives had been out of the array for nearly two weeks! We had neglected to install SMART and RAID monitoring in our Nagios setup. An enormous and nearly disastrous blunder. And we hadn't even looked at the SFTP backup space that Hetzner provided, which was close to out of space last we checked. But we seemed to be OK in the read-only workload. Overall, it was some amateur mistakes that almost cost us. We'd been so focused on moving things out that we didn't even take the time to check the server when we moved Hackage a few weeks prior and make sure the existing things kept working. We'd planned to do this move earlier, but now it obviously couldn't wait. So Davean spent most of his Tuesday migrating all the services over to Rackspace on a new VM, and we migrated the data to our shared MySQL server and got the mail relay running again.
The whole process took close to 12 hours or so, I'd say, but overall it went quite smoothly, without any kind of read errors or problems on the remaining drive, and restored service.

Improvements

In the midst of anticipating this move, I migrated a lot of download data - specifically GHC and the Haskell Platform - to a new server, https://downloads.haskell.org. It's powered by a brand new CDN via Fastly, and provides the Platform, GHC downloads, and documentation. Don't worry: redirects from the main server are still in place. We've also finally fixed a few bugs stopping us from deploying the new main website. We've deployed https://try.haskell.org now as an official instance, as well as fixed https://new-www.haskell.org to use it. We've also moved most of the MediaWiki data to our consolidated MariaDB server. However, because we were so hastily moving data, we didn't put the new wiki in the same DC as the database! As a result, we have somewhat of a performance loss. In the meantime, we've deployed Memcached to try and help offset this a bit. We'll be moving things again soon, but it will hopefully be faster and much less problematic. We've also taken the time to archive a lot of our data from rock onto DreamHost, who have joined us and given us free S3-compatible object storage! We'll be moving a lot of data there soon.

Future improvements

We've got several improvements we're going to deploy in the near future:

- We're obviously going to overhaul our Nagios setup, which is clearly inadequate and now out of date with all the recent changes.
- We'll be moving Fastly in front of Hackage soon, which should hopefully dramatically increase download speeds for all packages, and save us a lot of bandwidth in the long run.
- Beyond Memcached, our MediaWiki instance will be using Fastly as a CDN, as it's one of the most bandwidth-heavy parts of the site besides Hackage. We'll be working towards integrating purge support from MediaWiki with the upstream CDN.
- We'll also be moving our new wiki server into the same DC as the database so it can talk to the MySQL server more efficiently.
- We're planning some other tech-stack upgrades: a move to nginx across the remaining servers still using Apache, upgrading MediaWiki, etc.
- We'll be working towards better backups for all our servers, possibly using a solution like Attic and S3 synchronization.
- And some other stuff I'll keep secret.

Conclusion

Overall, the past few weeks have seen several improvements but also a lot of problems, and some horrible mistakes that would have cost us weeks of data if we hadn't been careful. Most of our code, repositories, configurations, and critical data was mirrored at the time, but we still got struck where we were vulnerable: on old hardware we had had issues with before. For that, I take responsibility on behalf of the administration team (as the de-facto lead, it seems) for the outage this past week, which cost us nearly a day of email and main-site availability.

The past week hasn't been great, but here's to hoping the next one will be better. Upwards and onwards.

Stackage server: new features and open source

Planet Haskell - Sun, 11/23/2014 - 07:00
Open source

We've been working on Stackage server for a while, and now that the code has stabilized it's ready to be open source. You can fork it on Github! We're a responsive team, used to bringing pull requests forward and getting them deployed.

Since the last update we added a bunch of things. Here's a rundown:

Clarifying improvements

- Recommended snapshots: the home page now lists recommended snapshots for GHC 7.8, the GHC 7.8 Haskell Platform flavour, and GHC 7.6. This makes it a little more straightforward where to start.
- Snapshots: the snapshots page has been cleaned up a bit to cleanly separate the times when snapshots were uploaded. Just an aesthetic improvement.
- We also reorganized the "Preparing your system to use Stackage" wiki page to make it more straightforward to get started.

Packages

We now list all packages from Hackage on the packages page. A subset of these are included in Stackage snapshots. Hitting those links takes you to our new package page. Examples: base, pipes, scientific, snap, haskell-src-exts, hlint.

On this page we display:

- Metadata for the most recently uploaded version of that package: name, version, license, authorship, maintainer.
- The dependencies of that package. Notice that there are no version constraints; that information is stored in the snapshot itself.
- Documentation versions: each snapshot that contains a version of this package will be listed along with documentation for that version.

README

We support showing a README from your package (see e.g. the hlint example above), which is a really nice way to introduce people to your package. You can use the same one that you use for your Github README.md; just include it in your .cabal file with:

```
extra-source-files: README.md
```

If your filename is just README it'll be included as plain text. With the .md extension it will be rendered as markdown.
If you don't have one, please write one and make it helpful and descriptive for any newbies to your package.

If no README file is found, we fall back to the package description, which is displayed as plain text. As is well known in the community at large, writing descriptive prose in Haddock syntax is not pleasant, whereas pretty much everyone is writing their READMEs in markdown anyway thanks to Github.

CHANGELOGs are also displayed as markdown if the file extension is .md; otherwise they're treated as plain text.

Specifying remote-repo in sandboxes

An issue with using Stackage in the past was that you had to either put your remote-repo field in your global cabal config, set up an hsenv, or use a cabal.config with constraints in it. Now a feature has been merged that allows specifying remote-repo in the cabal.config file in your project root. Once this is released you'll be able to use a Stackage snapshot within a sandbox and keep the cabal.config file in your source repository.

Additional package page features

Additional features include:

- Tagging: tagging a package is easy: you just click the + button, type something and hit return. We try to keep the tags to a simple format (a slug), so if you type characters that shouldn't be in there, it'll remove them and then prompt you to confirm. If you click a tag it will take you to a page of all packages tagged with it, e.g. parsing. Finally, there is a list of all tags. Currently it's rather small because it's just what I populated myself.
- Likes: just upvote a package if you like it. Hit the thumbs-up icon.
- Comments: comments are provided by Disqus.
It's very easy to comment with Disqus; you can use your normal account. If you're interested in a package, or are an author, you can hit the Subscribe link displayed at the bottom of the page to subscribe to that discussion and get updates on future comments.

In addition to the other metadata, we make use of the files in your package (like the Github-style README above), so we will also display those.

Summary

Stackage just got easier to use:

- The site is clearer now.
- The wiki guide is easier.
- Using Stackage from a sandbox will soon be very easy.
- You can browse documentation of your snapshot on Stackage via the snapshot (e.g. here).
- Or you can start from the package list and view an individual package, vote and tag it.
- It's open source!

Lucid: templating DSL for HTML

Planet Haskell - Sun, 11/23/2014 - 07:00
I'm not big on custom templating languages, for reasons I'll write about another time. I prefer EDSLs. I preferred the xhtml package back when that was what everybody used. It looked like this:

```haskell
header << thetitle << "Page title"

thediv noHtml ! [theclass "logo"] << "…"

thediv noHtml ! [identifier "login"]
```

Pretty line-noisy to read and write, and hard to edit in a reasonable manner.

Later, blaze-html became the new go-to HTML writing library. It improved upon the xhtml package by being faster and having a convenient monad instance. It looks like this:

```haskell
page1 = html $ do
  head $ do
    title "Introduction page."
    link ! rel "stylesheet" ! type_ "text/css" ! href "screen.css"
  body $ do
    div ! id "header" $ "Syntax"
    p "This is an example of BlazeMarkup syntax."
    ul $ mapM_ (li . toMarkup . show) [1, 2, 3]
```

Much easier to read, write and edit, thanks to the monad instance. However, after several years of using that, I've come to write my own. I'll cover the infelicities of Blaze and then discuss my alternative approach.

Reading back through what I've written below, it could be read as a bit attacky, and some of the issues are less philosophical and more incidental. I think of it more as: the work on writing HTML in a DSL is incomplete, and to some degree people somewhat gave up on doing it more conveniently at some point. So I'm re-igniting that. The combination of having a need to write a few HTML reports and recent discussions about Blaze made me realise it was time for me to come at this problem afresh with my own tastes in mind. I also haven't used my own approach much, other than porting some trivial apps to it.

Blaze

Names that conflict with base

The first problem is that Blaze exports many names which conflict with base. Examples: div, id, head, map. The obvious problem with this is that you either have to qualify any use of those names, which means you have to qualify Blaze, and end up with something inconsistent like this:

```haskell
H.div ! A.id "logo" $ "…"
```

Where H and A come from importing the element and attribute modules like this:

```haskell
import qualified Text.Blaze.Html5 as H
import qualified Text.Blaze.Html5.Attributes as A
```

Or you don't import Prelude and only import Blaze, but then you can't do a simple map without qualification. You might've noticed that in the old xhtml package, thediv and identifier are used instead. The problem with using names different from the actual things they refer to is that they're hard to learn and remember, both for regular Haskellers and for newbies coming to edit your templates.

Names that are keywords

This is a common problem in DSLs, too. In Blaze the problem arises for class and type (and perhaps others I don't recall). Blaze solves it with class_ and type_. Again, the problem is that this is inconsistent with the other naming conventions. It's another exception to the rule that you have to remember, and it makes the code look bad.

Conflicting attribute and element names

There are also names which are used for both attributes and elements. Examples are style and map. That means you can't write:

```haskell
H.head $ style "body { background: red; }"
body $ p ! style "foo" $ …
```

You end up writing:

```haskell
H.head $ H.style "body { background: red; }"
body $ p ! A.style "foo" $ …
```

Inconsistency is difficult and ugly

What the above problems amount to is ending up with code like this:

```haskell
body $ H.div ! A.id "logo" ! class_ "left" ! hidden $ "Content"
```

At this point users of Blaze give up second-guessing every markup term they write and decide it's more consistent to qualify everything:

```haskell
H.body $ H.div ! A.id "logo" ! A.class_ "left" ! A.hidden $ "Content"
```

Or, taken from some real code online:

```haskell
H.input H.! A.type_ "checkbox" H.! A.checked True H.! A.readonly "true"
```

This ends up being too much: inconvenient to type, ugly to read, and one more step removed from the HTML we're supposed to be generating.
The Monad instance isn't

The monad instance was originally conceived as a handy way to write HTML nicely without having to use <> or lists of lists and other less wieldy syntax. In the end the monad ended up being defined like this:

```haskell
instance Monad MarkupM where
  return _ = Empty
  {-# INLINE return #-}
  (>>) = Append
  {-# INLINE (>>) #-}
  h1 >>= f = h1 >> f
    (error "Text.Blaze.Internal.MarkupM: invalid use of monadic bind")
  {-# INLINE (>>=) #-}
```

And it has been for some years. Let's take a trivial example of why this is not good. You render some HTML and while doing so build a result to be used later:

```haskell
do xs <- foldM (\c i -> …) mempty ys
   mapM_ dd xs
```

Uh-oh:

```
*** Exception: Text.Blaze.Internal.MarkupM: invalid use of monadic bind
```

The overloaded strings instance is bad

The previous point leads on to this next point, which is that due to this phantomesque monad type, the instance is like this:

```haskell
instance IsString (MarkupM a) where
  fromString = Content . fromString
  {-# INLINE fromString #-}
```

How can it make this value? It cannot. If you want to go ahead and extract that `a', you get:

```
*** Exception: Text.Blaze.Internal.MarkupM: invalid use of monadic bind
```

Additionally, this instance is too liberal. You end up getting this warning:

```
A do-notation statement discarded a result of type GHC.Prim.Any
Suppress this warning by saying _ <- "Example"
or by using the flag -fno-warn-unused-do-bind
```

So in practice you end up having to write (again, taken from a real Blaze codebase by one of the authors):

```haskell
void "Hello!"
```

Which pretty much negates the point of using IsString in the first place. Alternatively, you use -fno-warn-unused-do-bind in your module.

Working with attributes is awkward

The ! syntax seems pretty convenient from superficial inspection:

```haskell
link ! rel "stylesheet" ! type_ "text/css" ! href "screen.css"
```

But in practice it means you always end up with the same combination:

```haskell
div ! class_ "logo" $ "…"
```

Which I find, personally speaking, a bit distasteful to read. It's not far from what we saw in the old xhtml package:

```haskell
thediv ! [theclass "logo"] << "…"
```

Did we really save that much in the attribute department?

Operators are evil

Mostly, operators present an editing challenge. Operators like this make code tricky to navigate, format in a regular way and do code transformations on. All Haskell code has operators, so this is a general problem. But if your DSL doesn't actually need these operators, I consider this a smell.

Attributes don't compose

You should be able to compose attributes. For example, let's say you want to define a re-usable component with Bootstrap:

```haskell
container inner = div ! class_ "container" $ inner
```

Now you can use it to make a container. But consider now that you also want to add additional attributes to it later, by applying ! again:

```haskell
container ! class_ "main" $ "zot"
```

In Blaze this produces:

```html
<div class="container" class="main">zot</div>
```

Browsers ignore the latter class attribute, so the composition didn't work.

Ceremony is tiring

Here's the example from Blaze's package that's introduced to users:

```haskell
import Prelude hiding (head, id, div)
import Text.Blaze.Html4.Strict hiding (map)
import Text.Blaze.Html4.Strict.Attributes hiding (title)
import Text.Blaze.Renderer.Utf8 (renderMarkup)

page1 :: Markup
page1 = html $ do
  head $ do
    title "Introduction page."
    link ! rel "stylesheet" ! type_ "text/css" ! href "screen.css"
  body $ do
    div ! id "header" $ "Syntax"
    p "This is an example of BlazeMarkup syntax."
    ul $ mapM_ (li . toMarkup . show) [1, 2, 3]

main = print (renderMarkup page1)
```

Apart from the import backflips you have to do to resolve names properly, you have at least three imports to make just to render some HTML. Call me lazy, or stupid, but I never remember this deep hierarchy of modules and always have to look it up every single time. And I've been using Blaze for as long as the authors have.
Transforming

A smaller complaint is that it would sometimes be nice to transform over another monad. The simplest example is storing read-only model information in a reader monad, so that you don't have to pass around a bunch of things as arguments to all your view functions. I'm a big fan of function arguments for explicit state, but not so much if it's the same argument every time.

No Show instance

It would be nice if you could just write some markup in the REPL without having to import some other modules and wrap it all in a function just to see it.

Lucid

My new library, Lucid, attempts to solve most of these problems.

Naming issues

Firstly, all names which are representations of HTML terms are suffixed with an underscore _: p_, class_, table_, style_. No ifs or buts. All markup terms. That solves the following problems (from the issues described above):

- Names that conflict with base: div_, id_, head_, map_, etc.
- Names that are keywords: class_, type_, etc.
- Conflicting attribute and element names: solved by abstracting those names via a class. You can write style_ to mean either the element name or the attribute name.
- Inconsistency is difficult and ugly: there's no inconsistency, all names have the same format. No import problems or qualification. Just write code without worrying about it.

How it looks

Plain text is written using the OverloadedStrings and ExtendedDefaultRules extensions, and is automatically escaped:

```haskell
λ> "123 < 456" :: Html ()
123 &lt; 456
```

Elements nest by function application:

```haskell
λ> table_ (tr_ (td_ (p_ "Hello, World!")))
<table><tr><td><p>Hello, World!</p></td></tr></table>
```
Elements are juxtaposed via monoidal append:

```haskell
λ> p_ "hello" <> p_ "sup"
<p>hello</p><p>sup</p>
```
Or monadic sequencing:

```haskell
λ> div_ (do p_ "hello"; p_ "sup")
<div><p>hello</p><p>sup</p></div>
```
Attributes are set using the with combinator:

```haskell
λ> with p_ [class_ "brand"] "Lucid Inc"
<p class="brand">Lucid Inc</p>
```
Conflicting attributes (like style_) work for attributes or elements:

```haskell
λ> html_ (head_ (style_ "body{background:red}") <>
          with body_ [style_ "color:white"] "Look ma, no qualification!")
<html><head><style>body{background:red}</style></head><body style="color:white">Look ma, no qualification!</body></html>
```

The Blaze example

For comparison, here's the Blaze example again:

```haskell
page1 = html $ do
  head $ do
    title "Introduction page."
    link ! rel "stylesheet" ! type_ "text/css" ! href "screen.css"
  body $ do
    div ! id "header" $ "Syntax"
    p "This is an example of BlazeMarkup syntax."
    ul $ mapM_ (li . toMarkup . show) [1, 2, 3]
```

And the same thing in Lucid:

```haskell
page2 = html_ $ do
  head_ $ do
    title_ "Introduction page."
    with link_ [rel_ "stylesheet",type_ "text/css",href_ "screen.css"]
  body_ $ do
    with div_ [id_ "header"] "Syntax"
    p_ "This is an example of Lucid syntax."
    ul_ $ mapM_ (li_ . toHtml . show) [1,2,3]
```

I'm not into operators like ($) and swung indentation like that, but I followed the same format. I'd write it in a more Lispy style and run my hindent tool on it:

```haskell
page1 =
  html_ (do head_ (do title_ "Introduction page."
                      with link_
                           [rel_ "stylesheet"
                           ,type_ "text/css"
                           ,href_ "screen.css"])
            body_ (do with div_ [id_ "header"] "Syntax"
                      p_ "This is an example of Lucid syntax."
                      ul_ (mapM_ (li_ . toHtml . show) [1,2,3])))
```

But that's another discussion.

It's a real monad

Normal monadic operations work properly:

```haskell
λ> (return "OK!" >>= p_)
<p>OK!</p>
```
It's basically a writer monad. In fact, it's also a monad transformer:

```haskell
λ> runReader (renderTextT (html_ (body_ (do name <- lift ask
                                            p_ (toHtml name)))))
             ("Chris" :: String)
"<html><body><p>Chris</p></body></html>"
```

Overloaded strings instance is fine

The instance is constrained over the return type being (). So string literals can only be of type HtmlT m ().

```haskell
λ> do "x" >> "y" :: Html ()
xy
λ> do x <- "x"; toHtml (show x)
x()
```

Attributes

Attributes are simply written as a list. That's all. Easy to manipulate as a data structure, easy to write and edit, and automatically indented in a predictable way:

```haskell
λ> with p_ [id_ "person-name",class_ "attribute"] "Mary"
<p id="person-name" class="attribute">Mary</p>
```
No custom operators are required. Just the with combinator. If you want to indent it, just indent it like normal function application:

```haskell
with p_
     [id_ "person-name",class_ "attribute"]
     "Mary"
```

And you're done.

Composing attributes

You should be able to compose attributes. For example, let's say you want to define a re-usable component with Bootstrap:

```haskell
λ> let container_ = with div_ [class_ "container "]
```

Now you can use it to make a container:

```haskell
λ> container_ "My content!"
<div class="container ">My content!</div>
```

But consider now that you also want to add additional attributes to it later. You can do that with another call to with:

```haskell
λ> with container_ [class_ "main"] "My content!"
<div class="container main">My content!</div>
```

Duplicate attributes are composed with normal monoidal append. Note that I added a space in my definition of container_, anticipating further extension later. Other attributes might not compose with spaces.

Unceremonious

Another part I made sure was right was the lack of import nightmare. You just import Lucid and away you go:

```haskell
λ> import Lucid
λ> p_ "OK!"
<p>OK!</p>
λ> p_ (span_ (strong_ "Woot!"))
<p><span><strong>Woot!</strong></span></p>
λ> renderBS (p_ (span_ (strong_ "Woot!")))
"<p><span><strong>Woot!</strong></span></p>"
λ> renderToFile "/tmp/foo.html" (p_ (span_ (strong_ "Woot!")))
```

If I want to do more advanced stuff, it's all available in Lucid. But by default it's absolutely trivial to get going and output something.

Speed

Despite having a trivial implementation, being a real monad and a monad transformer, it's not far from Blaze. You can compare the benchmark reports here. A quick test of writing 38M of HTML to file yielded the same speed (about 1.5s) for both Lucid and Blaze. With such decent performance for very little work, I'm already ready to start using it for real work.

Summary

So the point of this post was really to explain why another HTML DSL, and I hope I did that well enough. The code is on Github. I pushed to Hackage, but you can consider it beta for now. 2014-11-20T00:00:00Z

Four Haskell books

Planet Haskell - Sun, 11/23/2014 - 07:00
In 2013 and 2014 I read four Haskell books. In this post I'm writing mini-reviews of each of them, chronologically ordered.

Learn You a Haskell for Great Good

Learn You a Haskell for Great Good is one of the best computer science books I have had the occasion to read, and I even admit I was a little sad once I finished it! It exposes the main concepts of Haskell in a very pedagogic way and a clear writing style. After reading the book, functors, applicatives and monads become easily understandable. The only drawback is that this book is not enough to start with Haskell: there are no explanations of how to create a Haskell project using Cabal, use libraries, test your application, etc. I think that's okay for an introductory book, but one should be informed. The book is suited for people without experience with Haskell or functional programming.

Developing Web Applications with Haskell and Yesod

I read Developing Web Applications with Haskell and Yesod to discover other ways of building web applications, as I mostly had experience with Clojure and Compojure. I enjoyed reading this book, which is short but contains a lot, going from the frontend to the backend and covering many subjects: HTML templates, sessions, authentication, persistence with databases, etc. You discover how a statically typed language can help you build a web application with more guarantees. The only drawback for me was that even after reading one book on Haskell, some of the type signatures of functions were still hard to understand. Some familiarity with monad transformers prior to reading this book may help.

Real World Haskell

The goal of Real World Haskell was to bring Haskell to a less academic audience by showing how Haskell could be used for "real world" applications, at a time when there weren't many Haskell books around. In this sense I think the book succeeded.
There are a lot of interesting subjects tackled in this book, like profiling and performance analysis, but I did not really enjoy reading it. Either the examples were a bit boring or the writing style was too dry for me; in the end I had to fight to finish this very long book (~700 pages!). I nonetheless appreciate that this book exists, and I may use it in the future as a reference. The book is a bit outdated and some code is not valid anymore; this was not a problem for me since I didn't try out the examples, but it should be considered if you want to learn Haskell with it.

Beginning Haskell

Beginning Haskell is a paradoxical book. The truth is this book should not have been published, because its editing and proof-reading are too bad. What do you think of a book where the first code sample is wrong? map listOfThings action instead of map action listOfThings, on page 4, come on… Also the subtitle reads "a Project-Based approach", but this seems to be an exaggeration, since you will only get some code samples here and there…

That being said, I really enjoyed reading this book! I don't know if it's a good introductory book, but it surely is a correct second book on Haskell. It explains how to use Cabal, test an application, use popular libraries like Conduit, build parsers with attoparsec, build a web application with Scotty and Fay, etc. It includes advanced topics like type-level programming (enforcing more constraints at the type level) and even a small introduction to Idris. Conclusion: either burn it or read it.

Conclusion

There are more and more books on Haskell, newly published or coming; this makes me happy, as it broadens the Haskell community, shows that it is growing, and makes it easier to learn the language. Reading is not enough to learn, so at the moment I'm writing small Haskell projects to go from a beginner to an intermediate level.

Announcing Shake 0.14

Planet Haskell - Sun, 11/23/2014 - 07:00
Summary: Shake 0.14 is now out. The *> operator became %>. If you are on Windows, the FilePath operations work differently.

I'm pleased to announce Shake 0.14, which has one set of incompatible changes, one set of deprecations and some new features.

The *> operator and friends are now deprecated

The *> operator has been the Shake operator for many years, but in GHC 7.10 the *> operator is exported by the Prelude, so it causes name clashes. To fix that, I've added %> as an alias for *>. I expect to export *> for the foreseeable future, but you should use %> going forward, and will likely have to switch with GHC 7.10. All the Shake documentation has been updated. The &*> and |*> operators have been renamed &%> and |%> to remain consistent.

Development.Shake.FilePath now exports System.FilePath

Previously the module Development.Shake.FilePath mostly exported System.FilePath.Posix (even on Windows), along with some additional functions, but modified normalise to remove /../ and made </> always call normalise. The reason is that if you pass non-normalised FilePath values to need, Shake can end up with two names for one on-disk file and everything goes wrong, so it tried to avoid creating non-normalised paths.

As of 0.14 I now fully normalise the values inside need, so there is no requirement for the arguments to need to be normalised already. This change allows the simplification of directly exporting System.FilePath and only adding to it. If you are not using Windows, the changes are:

- normalise doesn't eliminate /../, but normaliseEx does.
- </> no longer calls normalise. It turned out calling normalise was pretty harmful for FilePattern values, which leads to fairly easy-to-make but hard-to-debug issues.

If you are using Windows, you'll notice all the operations now use \ instead of /, and properly cope with Windows-specific aspects like drives. The function toStandard (convert all separators to /) might be required in a few places.
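To make the non-Windows behaviour concrete, here is a small sketch using the plain filepath library (via System.FilePath.Posix for deterministic output on any platform). Note that normaliseEx lives in Shake itself and isn't shown here; the example paths are my own:

```haskell
module Main where

import System.FilePath.Posix (normalise, (</>))

main :: IO ()
main = do
  print (normalise "foo//./bar") -- duplicate and "./" separators are cleaned up: "foo/bar"
  print (normalise "a/../b")     -- but "/../" is NOT eliminated: "a/../b"
  print ("dir" </> "a/../b")     -- (</>) just joins, with no normalisation: "dir/a/../b"
```

This matches the two bullet points above: normalise tidies separators but keeps .. components, and joining with (</>) performs no normalisation at all.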
The reasons for this change are discussed in bug #193.

New features

This release has lots of new features since 0.13. You can see a complete list in the change log, but the most user-visible ones are:

- Add getConfigKeys to Development.Shake.Config.
- Add withTempFile and withTempDir for the Action monad.
- Add -j to run with one thread per processor.
- Make |%> matching with simple files much faster.
- Use far fewer threads, with correspondingly less stack usage.
- Add copyFileChanged.

Plans

I'm hoping the next release will be 1.0! 2014-11-19T21:27:00Z Neil Mitchell noreply@blogger.com

CITIZENFOUR showing in Edinburgh

Planet Haskell - Sun, 11/23/2014 - 07:00
CITIZENFOUR, Laura Poitras's documentary on Edward Snowden, sold out both of its showings in Edinburgh. The movie is rated five stars by the Guardian ("Gripping"), and four stars by the Independent ("Extraordinary"), the Financial Times ("True-life spy thriller"), the Observer ("Utterly engrossing"), and the Telegraph ("Everybody needs to see it").

Kickstarter-like, OurScreen will arrange to show a film if enough people sign up to see it. I've scheduled a showing:

Cameo Edinburgh, 12 noon, Tuesday 2 December 2014
Cost: £6.80

Book tickets from OurScreen. If 34 people sign up by Sunday 23 November, then the showing will go ahead.

Functors and Recursion

Planet Haskell - Sun, 11/23/2014 - 07:00
Posted on November 19, 2014 Tags: haskell

One of the common pieces of folklore in the functional programming community is how one can cleanly formulate recursive types with category theory. Indeed, using a few simple notions we can build a coherent enough explanation to derive some concrete benefits. In this post I'll outline how one thinks of recursive types, and then we'll discuss some of the practical ramifications of such thoughts.

Precursor

I'm assuming the reader is familiar with some basic notions from category theory, specifically the definitions of categories and functors.

Let's talk about endofunctors, which are functors whose domain and codomain are the same. (Spoiler: these are the ones we care about in Haskell.) An interesting notion that comes from endofunctors is that of algebras. An algebra in this sense is a pair of an object C and a map F C → C. Here F is called the "signature" and C is called the carrier.

If you're curious about these funny terms: in abstract algebra we deal with algebras which are comprised of a set of distinguished elements, functions, and axioms, called the signature. From there we look at sets (called carriers) which satisfy the specification. We can actually cleverly rearrange the specification for something like a group into an endofunctor! It's out of scope for this post, but interesting if algebras are your thing.

Now we can in fact define a category of F-algebras. In such a category an object is α : F A → A, and each arrow is a triplet:

- a normal arrow f : A → B
- an F-algebra α : F A → A
- another F-algebra β : F B → B

such that f ∘ α = β ∘ F f. In picture form, the diagram

```
           F f
  F A ------------> F B
   |                 |
 α |                 | β
   ↓                 ↓
   A  ------------>  B
            f
```

commutes. I generally elide the fact that we're dealing with triplets and instead focus on the arrow, since that's the interesting bit.

Now that we've established F-algebras, we need one more concept: the notion of initial objects.
An initial object is an… object, I, in a category such that for any object C there is an arrow

```
        f
  I - - - - - -> C
```

and f is unique. What we're interested in investigating is the initial object in the category of F-algebras. That means that

```
            α
   F I ------------> I
    |                |
F λ |                | λ
    ↓                ↓
   F C ------------> C
```

commutes for only a unique λ. A List is just an Initial Object in the Category of F-Algebras.

What's the problem?

Now, remembering that we're actually trying to understand recursive types, how can we fit the two together? We can think of recursive types as solutions to certain equations. In fact, our types are what are called the least fixed point solutions. Let's say we're looking at IntList. We can imagine it defined as

```haskell
data IntList = Cons Int IntList | Nil
```

We can in fact factor out the recursive call in Cons and get

```haskell
data IntList a = Cons Int a | Nil deriving Functor
```

Now we can represent a list of length 3 as something like

```haskell
type ThreeList = IntList (IntList (IntList Void))
```

Which is all well and good, but we really want arbitrary-length lists. We want a solution to the equation

```
X = IntList X
```

We can view such a type as a set {EmptyList, OneList, TwoList, ThreeList, ...}. Now how can we actually go about saying this? Well, we need to take a fixed point of the equation! This is easy enough in Haskell, since Haskell's type system is unsound.

```haskell
-- Somewhere, somehow, a domain theorist is crying.
data FixedPoint f = Fix {unfix :: f (FixedPoint f)}
```

Now we can regain our normal representation of lists with

```haskell
type List = FixedPoint IntList
```

To see how this works:

```haskell
out :: FixedPoint IntList -> [Int]
out (Fix f) = case fmap out f of
  Nil      -> []
  Cons a b -> a : b

-- `in` is a Haskell keyword, so the inverse is called `into` here.
into :: [Int] -> FixedPoint IntList
into []       = Fix Nil
into (x : xs) = Fix (Cons x (into xs))
```

Now this transformation is interesting for one reason in particular: IntList is a functor. Because of this, we can formulate an F-algebra for IntList.
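Collected into a single compilable module, the pieces so far look like this (a sketch; since `in` is a keyword, the inverse of out is named into):

```haskell
{-# LANGUAGE DeriveFunctor #-}
module Main where

-- The functorfied list and its fixed point, as in the post.
data IntList a = Cons Int a | Nil deriving Functor

data FixedPoint f = Fix { unfix :: f (FixedPoint f) }

-- Collapse the fixed point back to an ordinary list.
out :: FixedPoint IntList -> [Int]
out (Fix f) = case fmap out f of
  Nil      -> []
  Cons a b -> a : b

-- Build the fixed-point representation from an ordinary list.
into :: [Int] -> FixedPoint IntList
into []       = Fix Nil
into (x : xs) = Fix (Cons x (into xs))

main :: IO ()
main = print (out (into [1, 2, 3]))  -- round-trips back to [1,2,3]
```

Running it confirms that out and into are mutually inverse on finite lists.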
```haskell
type ListAlg a = IntList a -> a
```

Now we consider what the initial object in this category would be. It'd be some I for which we have a function

```haskell
cata :: ListAlg a -> (I -> a)
-- Remember that I -> a is an arrow in F-Alg
cata :: (IntList a -> a) -> I -> a
cata :: (Either () (a, Int) -> a) -> I -> a
cata :: (() -> a) -> ((a, Int) -> a) -> I -> a
cata :: a -> (Int -> a -> a) -> I -> a
cata :: (Int -> a -> a) -> a -> I -> a
```

Now that looks sort of familiar. What's the type of foldr again?

```haskell
foldr :: (a -> b -> b) -> b -> [a] -> b
foldr :: (Int -> a -> a) -> a -> [Int] -> a
```

So the arrow we get from the initiality of I is precisely the same as foldr! This leads us to believe that maybe the initial object for F-algebras in Haskell is just the least fixed point, just as [Int] is the least fixed point for IntList. To confirm this, let's generalize a few of our definitions from before:

```haskell
type Alg f a = f a -> a

data Fix f = Fix {unfix :: f (Fix f)}

type Init f = Alg f (Fix f)

cata :: Functor f => Alg f a -> Fix f -> a
cata f = f . fmap (cata f) . unfix
```

(Exercise: draw out the reduction tree for cata on lists.)

Our suspicion is confirmed: the fixed point of a functor is indeed the initial object. Furthermore, we can easily show that initial objects are unique up to isomorphism (exercise!), so anything that can implement cata is isomorphic to the original, recursive definition we were interested in.

When The Dust Settles

Now that we've determined a potentially interesting fact about recursive types, how can we use this knowledge? Well, to start, we can define a truly generic fold function:

```haskell
fold :: Functor f => (f a -> a) -> Fix f -> a
```

This delegates all the messy details of how one actually handles the "shape" of the container we're folding across to the collapsing function f a -> a.
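As a quick sanity check of the generalized definitions, here is a self-contained sketch that recovers foldr-style summation from cata (the fromList helper and sumAlg name are mine, for illustration):

```haskell
{-# LANGUAGE DeriveFunctor #-}
module Main where

data IntList a = Cons Int a | Nil deriving Functor

data Fix f = Fix { unfix :: f (Fix f) }

type Alg f a = f a -> a

-- The generic fold from the post.
cata :: Functor f => Alg f a -> Fix f -> a
cata f = f . fmap (cata f) . unfix

-- Build a fixed-point list from an ordinary one.
fromList :: [Int] -> Fix IntList
fromList = foldr (\x xs -> Fix (Cons x xs)) (Fix Nil)

-- An IntList-algebra for summation; cata turns it into a full fold.
sumAlg :: Alg IntList Int
sumAlg Nil        = 0
sumAlg (Cons x s) = x + s

main :: IO ()
main = print (cata sumAlg (fromList [1, 2, 3, 4]))  -- prints 10
```

The algebra only says what to do with one layer (Nil or Cons); cata supplies all the recursion.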
While this may seem like a small accomplishment, it means we can build on it to create datatype-generic programs that fit into our existing world. For example, what about mutual recursion? fold captures the notion of recurring across one list in a rather slick way; however, recurring over two in lockstep involves a call to zip and other fun and games. How can we capture this with cata? We'd imagine that the folding functions for such a scenario would have the types

```haskell
f (a, b) -> a
f (a, b) -> b
```

From here we can build

```haskell
muto :: Functor f => (f (a, b) -> a) -> (f (a, b) -> b) -> Fix f -> (a, b)
muto f g = cata ((,) <$> f <*> g)
```

Similarly we can build up oodles of combinators for dealing with folding, all built on top of cata! That unfortunately sounds like a lot of work. We can shamelessly freeload off the hard work of others thanks to Hackage, though. In particular, the package recursion-schemes has built up a nice little library for dealing with initial algebras. There's only one big twist between what we've laid out and what it does.

One of the bigger stumbling blocks for our library was changing the nice recursive definition of a type into the functorfied version. Really, it's not realistic to write all your types this way. To simplify the process, recursion-schemes provides a type family called Base which takes a type and returns its functorfied version. We can imagine something like

```haskell
data instance Base [a] b = Cons a b | Nil
```

This simplifies the process of actually using all these combinators we're building. To use recursion-schemes, all you need to do is define such an instance and write project :: t -> Base t t. After that it's all kittens and recursion.
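Here is muto in action, under the same definitions as before: computing the sum and length of a list in one lockstep pass (sumAlg, lenAlg and fromList are illustrative names of my own):

```haskell
{-# LANGUAGE DeriveFunctor #-}
module Main where

data IntList a = Cons Int a | Nil deriving Functor

data Fix f = Fix { unfix :: f (Fix f) }

cata :: Functor f => (f a -> a) -> Fix f -> a
cata f = f . fmap (cata f) . unfix

-- Fold two mutually dependent algebras in lockstep, as described above.
muto :: Functor f => (f (a, b) -> a) -> (f (a, b) -> b) -> Fix f -> (a, b)
muto f g = cata ((,) <$> f <*> g)

fromList :: [Int] -> Fix IntList
fromList = foldr (\x xs -> Fix (Cons x xs)) (Fix Nil)

-- Each algebra sees the pair of results from the tail of the list.
sumAlg :: IntList (Int, Int) -> Int
sumAlg Nil             = 0
sumAlg (Cons x (s, _)) = x + s

lenAlg :: IntList (Int, Int) -> Int
lenAlg Nil             = 0
lenAlg (Cons _ (_, n)) = 1 + n

main :: IO ()
main = print (muto sumAlg lenAlg (fromList [10, 20, 30]))  -- prints (60,3)
```

Each algebra can inspect the other's intermediate result, which is exactly the mutual-recursion pattern that would otherwise need zip.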
It turns out there's a good story for this as well: unfolding is the operation (an anamorphism) given by a terminal object in a category. A terminal object is the precise dual of an initial one. You can see this in recursion-schemes, which features ana as well as cata. 2014-11-19T00:00:00Z
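To give a flavour of that dual story, here is a hedged, self-contained sketch: ana below is hand-rolled (it mirrors cata with the arrows flipped) rather than the recursion-schemes version, and the definitions are repeated so the snippet compiles on its own.

```haskell
{-# LANGUAGE DeriveFunctor #-}

newtype Fix f = Fix { unfix :: f (Fix f) }

data IntListF a = NilF | ConsF Int a
  deriving Functor

-- Fold: tear a structure down with an algebra f a -> a
cata :: Functor f => (f a -> a) -> Fix f -> a
cata f = f . fmap (cata f) . unfix

-- Unfold: grow a structure from a seed with a coalgebra a -> f a
ana :: Functor f => (a -> f a) -> a -> Fix f
ana g = Fix . fmap (ana g) . g

-- Unfold the list [n, n-1 .. 1] from a seed n
countDown :: Int -> IntListF Int
countDown 0 = NilF
countDown n = ConsF n (n - 1)

-- ... then fold it back down into a sum
sumAlg :: IntListF Int -> Int
sumAlg NilF        = 0
sumAlg (ConsF x s) = x + s
```

Composing the two, cata sumAlg (ana countDown n) computes n + (n-1) + ... + 1, a tiny instance of the build-then-collapse pattern.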

GHC Weekly News 2014/11/18

Planet Haskell - Sun, 11/23/2014 - 07:00
Hello *,

Once more we have the GHC Weekly News! This one is a bit late due to Austin being in limbo unexpectedly for a few days last week. (The next one will of course come again on Friday to keep things straight.) With that out of the way, let's see what exactly is going on:

The STABLE freeze is happening at the end of this week! That means if you have something you want to get in, try to make people aware of it! Austin (yours truly) has a backed-up review queue, it would seem, but hopes to clear a bunch of it out before then.

Simon and Gergo started a whole bunch of discussion about type signatures for pattern synonyms. There is a surprising amount of content to talk about here for something that might seem simple: https://www.haskell.org/pipermail/ghc-devs/2014-November/007066.html

Herbert Valerio Riedel has finally landed integer-gmp2, AKA Phab:D86, which implements a complete overhaul of the integer-gmp library. This library will be switched on by default in GHC 7.10.1, which means the integer-gmp library version will have a super-major bump (version 1.0.0.0). This is the beginning of a longer-term vision for more flexible Integer support in GHC, as described by Herbert on the design page: https://ghc.haskell.org/trac/ghc/wiki/Design/IntegerGmp2 This implementation also fixes a long-standing pain point where GHC would hook GMP allocations to exist on the GHC heap. Now GMP is just called like any other FFI library.

Jan Stolarek made a heads-up to help out GHC newcomers: if you see a ticket that should be easy, please tag it with the newcomer keyword! This will give us a live search of bugs that new developers can take over. (Incidentally, Joachim mentions this is the same thing Debian is doing in their bug tracker): https://www.haskell.org/pipermail/ghc-devs/2014-November/007313.html

Merijn Verstraaten has put forward a proposal for more flexible literate-style Haskell file extensions.
There doesn't seem to be any major opposition, just some questions about the actual specification and some other ramifications: https://www.haskell.org/pipermail/ghc-devs/2014-November/007319.html

Facundo Domínguez posed a question about CAFs in the GC, which Jost Berthold was fairly quick to reply to: https://www.haskell.org/pipermail/ghc-devs/2014-November/007353.html

Adam Gundry, Eric Seidel, and Iavor Diatchki have grouped together to get a new, unexpected feature into 7.10: type-checking plugins. Now GHC will be able to load a regular Haskell package as a plugin during the compilation process. Iavor has a work-in-progress plugin that solves constraints for type-level natural numbers using an SMT solver. The code review from everyone was published in Phab:D489.

Austin opened up a discussion about the future of the Haskell98 and Haskell2010 packages, and the unfortunate conclusion is that it looks like we're going to drop them for 7.10. Austin has some rationale, and there was some follow-up in the mailing list thread too: https://www.haskell.org/pipermail/ghc-devs/2014-November/007357.html

Closed tickets this week include: #9785, #9053, #9513, #9073, #9077, #9683, #9662, #9646, #9787, #8672, #9791, #9781, #9621, #9594, #9066, #9527, #8100, #9064, #9204, #9788, #9794, #9608, #9442, #9428, #9763, #9664, #8750, #9796, #9341, #9330, #9323, #9322, #9749, #7381, #8701, #9286, #9802, #9800, #9302, #9174, #9171, #9141, #9100, #9134, #8798, #8756, #8716, #8688, #8680, #8664, #8647, #9804, #8620, #9801, #8559, #8545, #8528, #8544, #8558 2014-11-18T17:35:37Z thoughtpolice

Darcs News #107

Planet Haskell - Sun, 11/23/2014 - 07:00
News and discussions

Darcs has received two grants from the Google Summer of Code program, as part of the umbrella organization Haskell.org.

Alejandro Gadea will work on history reordering:
http://alegdarcs.blogspot.com.ar/2014/05/google-summer-of-code-2014-darcs.html

Marcio Diaz will work on the cache system:
http://marcioodiaz.blogspot.com.ar/2014/04/gsoc-project-accepted.html
http://marcioodiaz.blogspot.com.ar/2014/04/gsoc-progress-report-1-complete.html

Repository cloning to remote ssh hosts has been present for years as darcs put. This feature now has a more efficient implementation:
http://hub.darcs.net/darcs/darcs-reviewed/patch/20140425060647-5ef8f

Issues resolved (11)

- issue851 Dan Frumin: interactive mode for whatsnew (http://bugs.darcs.net/issue851)
- issue1066 Guillaume Hoffmann: clone to ssh URL by locally cloning then copying by scp (http://bugs.darcs.net/issue1066)
- issue1268 Guillaume Hoffmann: enable to write darcs init x (http://bugs.darcs.net/issue1268)
- issue1416 Ale Gadea: put log files in tempdir instead of in working dir (http://bugs.darcs.net/issue1416)
- issue1987 Marcio Diaz: garbage collection for inventories and patches (http://bugs.darcs.net/issue1987)
- issue2263 Ale Gadea: option --set-scripts-executable is not properly documented (http://bugs.darcs.net/issue2263)
- issue2345 Dan Frumin: solution using cabal's checkForeignDeps (http://bugs.darcs.net/issue2345)
- issue2357 Dan Frumin: switching to regex-compat-tdfa for unicode support (http://bugs.darcs.net/issue2357)
- issue2365 Guillaume Hoffmann: correctly copy pristine in no-working-dir clones (http://bugs.darcs.net/issue2365)
- issue2367 Guillaume Hoffmann: rename amend-record to amend, make --unrecord more visible (http://bugs.darcs.net/issue2367)
- issue2379 Guillaume Hoffmann: only use packs to copy pristine when up-to-date (http://bugs.darcs.net/issue2379)

Patches applied (41)

See darcs wiki entry for details.

Darcs News #109

Planet Haskell - Sun, 11/23/2014 - 07:00
News and discussions

We are in the feature freeze period of darcs 2.10:
http://lists.osuosl.org/pipermail/darcs-users/2014-November/027056.html

Our two Summer of Code projects ended two months ago. Marcio and Ale's code will be part of the upcoming new stable version of darcs. In case you missed them, here are the latest posts of Marcio for his project:
http://marcioodiaz.blogspot.com.ar/2014/07/gsoc-progress-report-3-bucketed-global_23.html
http://marcioodiaz.blogspot.com.ar/2014/07/gsoc-progress-report-4-garbage.html
http://marcioodiaz.blogspot.com.ar/2014/07/gsoc-progress-report-5-starting.html

Ale's posts:
http://alegdarcs.blogspot.com.ar/2014/07/month-of-june.html
http://alegdarcs.blogspot.com.ar/2014/07/some-week-14-19-july.html
http://alegdarcs.blogspot.com.ar/2014/07/other-week-21-26-july.html
http://alegdarcs.blogspot.com.ar/2014/08/last-few-weeks.html

Issues resolved (7)

- issue1514 Guillaume Hoffmann: --minimize-context flag for send (http://bugs.darcs.net/issue1514)
- issue1624 Marcio Diaz: bucketed cache (http://bugs.darcs.net/issue1624)
- issue2153 Andreas Brandt: allow skipping backwards through depended-upon patches (http://bugs.darcs.net/issue2153)
- issue2249 Mateusz Lenik: rename isFile to isValidLocalPath and WorkRepoURL to WorkRepoPossibleURL (http://bugs.darcs.net/issue2249)
- issue2380 Owen Stephens: allow darcs mv into known, but deleted in working, file (http://bugs.darcs.net/issue2380)
- issue2403 Ganesh Sittampalam: need to avoid moving the rebase patch to the end (http://bugs.darcs.net/issue2403)
- issue2409 Ganesh Sittampalam: implement darcs rebase apply (http://bugs.darcs.net/issue2409)

Patches applied (118)

See darcs wiki entry for details.

Installing application dependencies using Stackage, sandboxes, and freezing

Planet Haskell - Sun, 11/23/2014 - 07:00
Installing Haskell packages is still a pain. But I believe the community has some good-enough workarounds that put Haskell on par with a lot of other programming languages. The problem is mostly that the tools and techniques are newer, do not always integrate easily, and are still lacking some automation.

My strategy for successful installation:

- Install through Stackage
- Use a sandbox when you start having complexities
- Freeze (application) dependencies

Simple definitions:

- Stackage is Stable Hackage: a curated list of packages that are guaranteed to work together
- A sandbox is a project-local package installation
- Freezing is specifying exact dependency versions

I really hope that Stackage (and sandboxes, to a certain extent) are temporary workarounds before we have an amazing installation system such as backpack. But right now, I think this is the best general-purpose solution we have. There are other tools that you can use if you are not on Windows:

- hsenv (instead of sandboxes)
- nix (instead of Stackage and sandboxes)

hsenv has been a great tool that I have used in the past, but I personally don't think that sandboxing at the shell level with hsenv is the best choice architecturally. I don't want to have a sandbox name on my command line to remind me that it is working correctly; I just want cabal to handle sandboxes automatically.

Using Stackage

See the Stackage documentation. You just need to change the remote-repo setting in your ~/.cabal/config file.

Stackage is a curated list of packages that are guaranteed to work together. Stackage solves dependency hell with exclusive and inclusive package snapshots, but it cannot be used on every project.

Stackage offers two package lists: exclusive and inclusive. Exclusive includes only packages vetted by Stackage. Exclusive will always work, even for global installations.
This has the nice effect of speeding up installation and keeping your disk usage low, whereas if you default to using sandboxes and you are making minor fixes to libraries, you can end up with huge disk usage. However, you may eventually need packages not on Stackage, at which point you will need to use the inclusive snapshot. At some point you will be dealing with conflicts between projects, and then you definitely need to start using sandboxes. The biggest problem with Stackage is that you may need a newer version of a package than what is on the exclusive list. At that point you definitely need to stop using Stackage and start using a sandbox.

If you think a project has complex dependencies, which probably includes most applications in a team setting, you will probably want to start with a sandbox.

Sandboxes

    cabal sandbox init

A sandbox is a project-local package installation. It solves the problem of installation conflicts with other projects (either actively overwriting each other or passively sabotaging installs). However, the biggest problem with sandboxes is that, unlike Stackage exclusive, you still have no guarantee that cabal will be able to figure out how to install your dependencies.

Sandboxes are mostly orthogonal to Stackage. If you can use Stackage exclusive, you should, and if you never did a cabal update, you would have no need for a sandbox with Stackage exclusive. When I am making minor library patches, I try to just use my global package database with Stackage, to avoid bloating disk usage with redundant installs.

So even with Stackage we are going to end up wanting to create sandboxes. But we would still like to use Stackage in our sandbox: this will give us the highest probability of a successful install. Unfortunately, Stackage (remote-repo) integration does not work for a sandbox.

The good news is that there is a patch for Cabal that has already been merged (but not yet released).
Even better news is that you can use Stackage with a sandbox today! Cabal recognizes a cabal.config file which specifies a list of constraints that must be met, and we can set that to use Stackage:

    cabal sandbox init
    curl http://www.stackage.org/alias/fpcomplete/unstable-ghc78-exclusive/cabal.config > cabal.config
    cabal install --only-dep

Freezing

There is a problem with our wonderful setup: what happens when our package is installed in another location? If we are developing a library, we need to figure out how to make it work everywhere, so this is not as much of an issue.

Application builders, on the other hand, need to produce reliable, reproducible builds to guarantee correct application behavior. Haskellers have attempted to do this in the .cabal file by pegging versions. But .cabal file versioning is meant for library authors to specify maximum version ranges that a library author hopes will work with their package. Pegging packages to specific versions in a .cabal file will eventually fail, because there are dependencies of dependencies that are not listed in the .cabal file and thus not pegged. The previous section's usage of a cabal.config has a similar issue, since only packages from Stackage are pegged, but Hackage packages are not.

The solution to this is to freeze your dependencies:

    cabal freeze

This writes out a new cabal.config (overwriting any existing cabal.config). Checking in this cabal.config file guarantees that everyone on your team will be able to reproduce the exact same build of Haskell dependencies.
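For reference, a frozen cabal.config is nothing magical: it is just a constraints list. A hypothetical excerpt might look like the following (the package versions here are illustrative, not taken from any real snapshot or freeze run):

    -- cabal.config, as written by `cabal freeze` (versions illustrative)
    constraints: base ==4.7.0.1,
                 bytestring ==0.10.4.0,
                 text ==1.1.1.3

Because every transitive dependency gets an exact `==` constraint, the solver has no freedom left, which is precisely what makes the build reproducible.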
That gets us into upgrade issues, which will be discussed shortly. It is also worth noting that there is still a rare situation in which freezing won't work properly, because packages can be edited on Hackage.

Installation workflow

Let's go over an installation workflow:

    cabal sandbox init
    curl http://www.stackage.org/alias/fpcomplete/unstable-ghc78-exclusive/cabal.config > cabal.config
    cabal install --only-dep

An application developer will then want to freeze their dependencies:

    cabal freeze
    git add cabal.config
    git commit cabal.config

Upgrading packages

cabal-install should provide us with a cabal upgrade [PACKAGE-VERSION] command. That would perform an upgrade of the package to the version specified, but also perform a conservative upgrade of any transitive dependencies of that package. Unfortunately, we have to do upgrades manually.

One option for upgrading is to just wipe out your cabal.config and do a fresh re-install:

    rm cabal.config
    rm -r .cabal-sandbox
    cabal sandbox init
    curl http://www.stackage.org/alias/fpcomplete/unstable-ghc78-exclusive/cabal.config > cabal.config
    cabal update
    cabal install --only-dep
    cabal freeze

With this approach all your dependencies can change, so you need to re-test your entire application. To make this more efficient, you will probably want to think about upgrading more dependencies than you originally had in mind, to avoid doing this process again a week from now.

The other extreme is to become the solver: manually tinker with the cabal.config until you figure out an upgrade plan that cabal install --only-dep will accept.
In between, you can attempt to leverage the fact that cabal already tries to perform conservative upgrades once you have packages installed:

    rm cabal.config
    curl http://www.stackage.org/alias/fpcomplete/unstable-ghc78-exclusive/cabal.config > cabal.config
    cabal update
    cabal install --only-dep --force-reinstalls
    cabal freeze

You can make a first attempt without the --force-reinstalls flag, but the flag is likely to be necessary.

If you can no longer use Stackage because you need newer versions of the exclusive packages, then your workflow will be the same as above without the curl step. But you will have a greater desire to manually tinker with the cabal.config file. This process usually consists mostly of deleting constraints or changing them to lower bounds.

Conclusion

Upgrading packages is still a horrible experience. However, for a fresh install, using Stackage, sandboxes, and freezing works amazingly well. Of course, once you are unable to use Stackage because you need different exclusive versions, you will encounter installation troubles. But if you originally started from Stackage and try to perform conservative upgrades, you may still find your situation easier to navigate, because you have already greatly reduced the search space for cabal. And if you are freezing versions and checking in the cabal.config, the great thing is that you can experiment with installing new dependencies but can always revert to the last known working dependencies.

Using these techniques I am able to get cabal to reliably install complex dependency trees with very few issues and to get consistent application builds.

Regular Haskelling. How?

Planet Haskell - Sun, 11/23/2014 - 07:00
Ever since ICFP 2014 I've had a goal: to get into the habit of coding in Haskell. It's been the language I enjoy most for a few years now, but being surrounded by and talking to so many brilliant developers as I did during that week really drove home that I will only have more fun the more code I write. My goal was not very ambitious: just write something in Haskell most days, every week. So far I've managed to keep it up. These are a few tricks I've used, and they've worked well for me so far.

Just write, no matter what, just write

In ninth grade a rather successful Swedish author visited my school, and what I remember most from that is one thing he said: Just read! It doesn't matter what. It doesn't matter if what you read isn't considered good literature; read Harlequin books if that's what you like, read magazines, read comics. Just read! I think the same holds for writing code; it's only with practice that one gets comfortable expressing oneself in a particular language.

Fix warts

I can't actually think of any piece of code I've written that doesn't have some warts. They may be in the form of missing features, or quirks (bugs) in the implementation that force the user to regularly work in a less-than-optimal way. I've found fixing warts in tools and libraries I use myself to be one of the most rewarding tasks to take on; the feedback is so immediate that every fix causes a boost in motivation to fix the next one.

Exercise sites

Sometimes it's simply difficult to find the motivation to tackle working on an existing project, and inspiration for starting something new might be lacking too. This happens to me regularly, and I used to simply close the lid on the computer; now I try to find some exercises to do instead. There are several sources of exercises. I know Project Euler is rather popular among new Haskellers, but there are others. CodeEval is a site with problems at three different levels.
It may be extra interesting for people in the US, since some of the problems are sponsored by companies which seem to use the site as a place for recruiting. So far I've only seen American companies do that, but I suppose it might catch on in other parts of the world too. Haskell is one of several languages supported.

Exercism is both a site and a tool whose goal is to facilitate learning of languages. On first use the tool downloads the first exercise, and after completion one uses it to upload the solution to the site. Once uploaded, the solution is visible to other users, and they are allowed to "nitpick" (comment) on it. After uploading a solution to one exercise, the next exercise in the series becomes available. It supports a rather long list of programming languages.

I like both of these, but I've spent more time on the latter. Personally I find the idea behind Exercism very appealing, and I've been recommending it to a couple of co-workers already. Feel free to put links to other sources of exercises in the comments.

Simplify old code

With more practice come more and more insights into what functions are available and how to string them together. When I don't even feel like doing a full exercise on Exercism, I just dig out something that smells a little and clean it up. Anything is fair game, no matter how tiny. Just take a look at my implementation of reThrowError.

What else?

I'd love to hear tips and tricks from other people who aren't lucky enough to have a day job where they get to write Haskell. How do you keep up the learning and practice?

2014-11-16T00:00:00Z
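As a tiny, invented illustration of the kind of cleanup the "Simplify old code" section describes (this is not the actual reThrowError change linked above, just a made-up example of the pattern):

```haskell
-- Before: a hand-rolled recursion, the kind of small wart one
-- might find in old code
pairSumsOld :: [(Int, Int)] -> [Int]
pairSumsOld []            = []
pairSumsOld ((a, b) : xs) = (a + b) : pairSumsOld xs

-- After: the same function, leaning on standard combinators
-- once you know they exist
pairSums :: [(Int, Int)] -> [Int]
pairSums = map (uncurry (+))
```

The two definitions are equivalent; the point is only that familiarity with the Prelude turns five lines into one.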

Tomatoes are a subtype of vegetables

Planet Haskell - Sun, 11/23/2014 - 07:00
Subtyping is one of those concepts that seems to make sense when you first learn it (“Sure, convertibles are a subtype of vehicles, because all convertibles are vehicles but not all vehicles are convertibles”) but can quickly become confusing when function types are thrown into the mix. For example, if a is a subtype of […]

HsQML 0.3.2.0 released: Enters the Third Dimension

Planet Haskell - Sun, 11/23/2014 - 07:00
Last night I released HsQML 0.3.2.0, the latest edition of my Haskell binding to the Qt Quick GUI library. As usual, it's available for download from Hackage.

HsQML allows you to bind declarative user interfaces written in QML against a Haskell back-end, but sometimes you can't just let QML hog all the graphical fun to itself. This latest release allows you to incorporate 3D (OpenGL) graphics rendered from Haskell into your QML scenes using the new Canvas module.

The screenshot below shows off the OpenGL demo in the samples package. The colourful triangle is rendered using the regular Haskell Platform's OpenGL bindings, but HsQML sets up the environment so that it renders into a special HaskellCanvas element inside the QML scene. If you run the actual program you can see it being animated too, moving around and changing colour.

This release also adds the Objects.Weak module, which allows you to hold weak references to QML objects and keep track of their life cycles using finalisers. The new FactoryPool abstraction uses these primitives to help you efficiently keep track of instances you've created, especially for when you need to fire change signals on them for data-binding.

London Haskell User Group

I've been fortunate enough to get a speaking slot at the London Haskell User Group and will be giving a talk on Building Pragmatic User Interfaces in Haskell with HsQML on the 26th of November. Please feel free to come along and watch. You can RSVP on the group's meet-up page. The talk should be videoed and materials will be available online afterwards.

release-0.3.2.0 - 2014.11.13
 * Added OpenGL canvas support.
 * Added weak references and object finalisers.
 * Added FactoryPool abstraction.
 * Added To-only custom marshallers.
 * Added Ignored type.
 * Relaxed Cabal dependency constraint on 'text'.