Functional Programming

PolyConf 2015

Planet Haskell - 17 hours 59 min ago
Last year I spoke at PolyConf about client/server Haskell webapps, and especially on GHCJS. This year's PolyConf has just opened up a Call for Proposals. I really enjoyed my time at the conference last year: it's a great place to meet and interact with people doing cool things in other languages, and can be a great source of inspiration for ways we can do things better. Also, as I've mentioned recently, I think it's a great thing for us Haskellers to put out content accessible by the non-Haskell world, and a conference like this is a perfect way to do so. If you have a topic that you think polyglot programmers would be interested in, please take the time to submit a proposal for the conference.

A Type Safe Reverse or Some Hasochism

Planet Haskell - 17 hours 59 min ago
Conor McBride was not joking when he and his co-author entitled their paper about dependent typing in Haskell “Hasochism”: Lindley and McBride (2013). In trying to resurrect the Haskell package yarr, it seemed that a dependently typed reverse function needed to be written. Writing such a function turns out to be far from straightforward. How […]

Ok, Internet, I heard you. I need a new name for the JS type inferer/checker.

Planet Haskell - 17 hours 59 min ago
I knew a new name was needed for this JS type inferer/checker. Many people have pointed out that sweet.js uses the same short name (sjs). I could just stick with “Safe JS” but not make it short. Or I could pick a new name, to make it less boring. Help me help you! Please take […]

My Haskell tooling wishlist

Planet Haskell - 17 hours 59 min ago
I spend a lot of my time on Haskell tooling, both for my hobbies and my job. Almost every project I work on sparks a desire for another piece of tooling. Much of the time, I’ll follow that wish and take a detour to implement that thing (Fay, structured-haskell-mode, and hindent are some Haskell-specific examples). But in the end it means less time working on the actual domain problem I’m interested in, so a while ago I intentionally placed a quota on the amount of time I can spend on this. So this page will contain a list of things I’d work on if I had infinite spare time, and that I wish someone else would make. I’ll update it from time to time as ideas come to the fore. These projects are non-trivial but are do-able by one person who has enough free time and motivation. There is a common theme among the projects listed: they are things that Haskell, among most other well-known languages, is particularly well suited for, and yet we don’t have them as standard tools in the Haskell tool box. They should be!

An equational reasoning assistant

Equational reasoning lets you prove properties about your functions by following a simple substitution model to state that one term is equal to another. The approach I typically take is to expand and reduce until both sides of the equation are the same. Here is an example. I have a data type, Consumer. Here is an instance of Functor:

    instance Functor (Consumer s d) where
      fmap f (Consumer d p) =
        Consumer d
          (\s -> case p s of
                   (Failed e,s')    -> (Failed e,s')
                   (Continued e,s') -> (Continued e,s')
                   (Succeeded a,s') -> (Succeeded (f a),s'))

I want to prove that it is a law-abiding instance of Functor, which means proving that fmap id ≡ id. You don’t need to know anything about the Consumer type itself, just this implementation.
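For reference, here is a hedged sketch of what the Consumer type itself might look like; the constructors are guessed from the Functor instance above, and the real definition in the author's codebase may well differ:

```haskell
-- Hedged sketch: the Consumer type is not shown in the post; these
-- constructors are inferred from the Functor instance and may differ
-- from the actual definition.
data Result d a
  = Failed d
  | Continued d
  | Succeeded a
  deriving (Eq, Show)

-- A consumer pairs a description 'd' with a state-passing step.
data Consumer s d a =
  Consumer d (s -> (Result d a, s))

instance Functor (Consumer s d) where
  fmap f (Consumer d p) =
    Consumer d
      (\s -> case p s of
               (Failed e, s')    -> (Failed e, s')
               (Continued e, s') -> (Continued e, s')
               (Succeeded a, s') -> (Succeeded (f a), s'))

-- Run one step; useful for spot-checking the law on sample inputs.
runConsumer :: Consumer s d a -> s -> (Result d a, s)
runConsumer (Consumer _ p) = p
```

With this sketch, fmap id ≡ id can at least be spot-checked: runConsumer (fmap id c) s should equal runConsumer c s for any c and s.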
Here are some very mechanical steps one can take to prove this:

    id
    ≡ fmap id
    ≡ \(Consumer d p) ->
        Consumer d (\s -> case p s of
                            (Failed e,s')    -> (Failed e,s')
                            (Continued e,s') -> (Continued e,s')
                            (Succeeded a,s') -> (Succeeded (id a),s'))
    ≡ \(Consumer d p) ->
        Consumer d (\s -> case p s of
                            (Failed e,s')    -> (Failed e,s')
                            (Continued e,s') -> (Continued e,s')
                            (Succeeded a,s') -> (Succeeded a,s'))
    ≡ \(Consumer d p) -> Consumer d (\s -> p s)
    ≡ \(Consumer d p) -> Consumer d p
    ≡ id

So that's:

1. Expand the fmap id into the instance's implementation.
2. Reduce by applying the property that id x ≡ x.
3. Reason that if every branch of a case returns the original value of the case, then that whole case is an identity and can be dropped.
4. Eta-reduce.
5. Again, pattern-matching lambdas are just syntactic sugar for cases, so by the same rule this can be considered identity.
6. End up with what we wanted to prove: fmap id ≡ id.

These are pretty mechanical steps. They're also pretty laborious and error-prone. Of course, if you look at the first step, it's pretty obvious the whole thing is an identity, but writing the steps out provides transformations that can be statically checked by a program. So it's a good example, because it's easily understandable, and you can imagine proving something more complex would require a lot more steps and a lot more substitutions. Proof of identity for Applicative has substantially more steps, but is equally mechanical. Wouldn't it be nice if there was a tool which, given some expression, would do the following?

- Suggest a list of in-place expansions.
- Suggest a list of reductions based on a set of pre-defined rules (or axioms).

Then I could easily provide an interactive interface for this from Emacs. In order to do expansion, you need the original source of the function name you want to expand. So in the case of id, that's why I suggested stating an axiom (id a ≡ a) for this.
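The "suggest reductions from a set of axioms" step can be pictured as rewriting against a list of rules. A toy sketch over a tiny invented expression type (nothing here is from the actual assistant being described):

```haskell
-- Toy sketch of axiom-driven reduction suggestions. An axiom either
-- fires on a term (returning the rewritten term) or does not.
-- All names and the Term type are invented for illustration.
data Term
  = Id              -- the function id
  | App Term Term   -- application
  | Var String      -- a free variable
  deriving (Eq, Show)

type Axiom = Term -> Maybe Term

-- The axiom id a ≡ a from the post, read left-to-right as a reduction.
idAxiom :: Axiom
idAxiom (App Id a) = Just a
idAxiom _          = Nothing

-- All single-step reductions the assistant could suggest at the root.
suggest :: [Axiom] -> Term -> [Term]
suggest axioms t = [t' | ax <- axioms, Just t' <- [ax t]]
```

For example, suggest [idAxiom] (App Id (Var "x")) offers the single reduction Var "x"; a real assistant would also search subterms and keep the step history.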
Similarly, I could state the identity law for Monoids by saying mappend mempty a ≡ a and mappend a mempty ≡ a. I don’t necessarily need to expand the source of all functions; usually just the ones I’m interested in. Given such a system, for my example above, the program could actually perform all those steps automatically and spit out the steps so that I can read them if I choose, or otherwise accept that the proof was derived sensibly. In fact, suppose I have my implementation again, and I state what must be satisfied by the equational process (and, perhaps, some axioms that might be helpful for doing it, but in this case our axioms are pretty standard). I might write it like this:

    instance Functor (Consumer s d) where
      fmap f (Consumer d p) = ...

    proof [|fmap id ≡ id :: Consumer s d a|]

This Template Haskell macro proof would run the steps above, and if the equivalence is satisfied, the program compiles. If not, it generates a compile error, showing the steps it performed and where it got stuck. TH has limitations, so it might require writing it another way. Such a helpful tool would also encourage people (even newbies) to do more equational reasoning, which Haskell is often claimed to be good at, but which you don’t often see in evidence in codebases; in practice it isn’t a standard thing.

Promising work in this area: Introducing the Haskell Equational Reasoning Assistant – works pretty much how I described above. I don’t know where the source is; I’ve emailed the author about it. Will update with any results.

Catch for GHC

Ideally, we would never have non-exhaustive patterns in Haskell. But a combination of an insufficient type system and people’s insistence on using partial functions leads to a library ecosystem full of potential landmines. Catch is a project by Neil Mitchell which considers how a function is called when determining whether its patterns are exhaustive or not.
This lets us use things like head and actually have a formal proof that our use is correct, or a formal proof that our use, or someone else’s use, will possibly crash.

    map head . group

This is an example which is always correct, because group returns a list of non-empty lists. Unfortunately, Catch currently works only with a defunct Haskell compiler, but apparently it can be ported to GHC Core with some work. I would very much like for someone to do that. This is yet another project which is the kind of thing people claim is possible thanks to Haskell’s unique properties, but in practice it isn’t a standard thing, in the way that QuickCheck is.

A substitution stepper

This is semi-related, but different, to the proof assistant. I would like a program which can accept a Haskell module of source code and an expression to evaluate in the context of that module, and output the same expression, as valid source code, with a single evaluation step performed. This would be fantastic for writing new algorithms, for understanding existing functions and algorithms, for writing proofs, and for learning Haskell. There was something like this demonstrated in Inventing on Principle. The opportunities for education and general development practice are worth such a project. Note: a debugger stepper is not the same thing.
Example:

    foldr (+) 0 [1, 2, 3, 4]
    foldr (+) 0 (1 : [2, 3, 4])
    1 + foldr (+) 0 [2, 3, 4]
    1 + foldr (+) 0 (2 : [3, 4])
    1 + (2 + foldr (+) 0 [3, 4])
    1 + (2 + foldr (+) 0 (3 : [4]))
    1 + (2 + (3 + foldr (+) 0 [4]))
    1 + (2 + (3 + foldr (+) 0 (4 : [])))
    1 + (2 + (3 + (4 + foldr (+) 0 [])))
    1 + (2 + (3 + (4 + 0)))
    1 + (2 + (3 + 4))
    1 + (2 + 7)
    1 + 9
    10

Comparing this with foldl immediately shows the viewer how they differ in structure:

    foldl (+) 0 [1, 2, 3, 4]
    foldl (+) 0 (1 : [2, 3, 4])
    foldl (+) ((+) 0 1) [2, 3, 4]
    foldl (+) ((+) 0 1) (2 : [3, 4])
    foldl (+) ((+) ((+) 0 1) 2) [3, 4]
    foldl (+) ((+) ((+) 0 1) 2) (3 : [4])
    foldl (+) ((+) ((+) ((+) 0 1) 2) 3) [4]
    foldl (+) ((+) ((+) ((+) 0 1) 2) 3) (4 : [])
    foldl (+) ((+) ((+) ((+) ((+) 0 1) 2) 3) 4) []
    (+) ((+) ((+) ((+) 0 1) 2) 3) 4
    1 + 2 + 3 + 4
    3 + 3 + 4
    6 + 4
    10

Each step in this is a valid Haskell program, and it’s just simple substitution. If the source for a function isn’t available, there are a couple of options for what to do:

- Have special cases for things like (+), as above.
- Just perform no substitution for that function; it will still be a legitimate program.

It’s another project I could easily provide see-as-you-type support for in Emacs, given an engine to query. Again, this is just one more project which should just be a standard thing Haskell can do. It’s a pure language. It’s used to teach equational reasoning and following a simple lambda calculus substitution model. But there is no such tool. Haskell is practically waving in our faces with this opportunity.

Existing work in this area: stepeval – a prototype which nicely demonstrates the idea. It’s based on HSE and only supports a tiny subset. There aren’t any plans to move this forward at the moment. I’ll update the page if this changes.

2015-01-24T00:00:00Z
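A stepper over real Haskell source is a big project, but the core idea, one reduction per step with every intermediate form printable, fits in a few lines over a toy expression type. A sketch with an invented AST, nowhere near full Haskell:

```haskell
-- Toy illustration of a substitution stepper: one reduction at a time,
-- and each intermediate form is itself a valid expression. A real tool
-- would work on Haskell source; this sketch uses a tiny invented AST.
data Expr
  = Lit Int
  | Add Expr Expr
  deriving (Eq, Show)

-- Perform exactly one leftmost reduction step, if any remains.
step :: Expr -> Maybe Expr
step (Lit _)               = Nothing
step (Add (Lit a) (Lit b)) = Just (Lit (a + b))
step (Add a b)             =
  case step a of
    Just a' -> Just (Add a' b)
    Nothing -> fmap (Add a) (step b)

-- The full trace, analogous to the foldr/foldl listings above.
steps :: Expr -> [Expr]
steps e = e : maybe [] steps (step e)
```

For example, steps (Add (Lit 1) (Add (Lit 2) (Lit 3))) lists the original expression, then Add (Lit 1) (Lit 5), then Lit 6.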

Writing a low level graph database

Planet Haskell - 17 hours 59 min ago
I've been interested in graph databases for a long time, and I've developed several applications that offer an API close enough to a graph API, but with relational storage. I've also played with off-the-shelf graph databases, but I thought it would be fun to try and implement my own, in Haskell of course. I found that in general the literature is quite succinct on how database products manage their physical storage, so I've used some of the ideas behind the Neo4J database, as explained in the Graph Databases book and in a few slideshows online. So I've written the start of a very low-level graph database, writing directly to disk via Handles and some Binary instances. I try to use fixed-length records so that their IDs translate easily into offsets in the file. Mostly everything ends up looking like linked lists on disk: vertices have a pointer to their first property and their first edge, and in turn these have pointers to the next property or edge. Vertices have pointers to the edges linking to and from them. I've also had some fun trying to implement an index trie on disk. All in all, it's quite fun, even though I realize my implementations are quite naive, and I just hope that the OS disk caching is enough to make performance acceptable. I've written a small benchmark using the Hackage graph of packages as sample data, but I would need to write the same with a relational backend. If anybody is interested in looking at the code or even participating, everything is of course on Github!
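The fixed-length-record idea can be sketched like this; all names and field layouts below are invented for illustration and are not the actual code from the repository:

```haskell
import Data.Binary (Binary (..), decode, encode)
import Data.Word (Word32)

-- Record IDs double as file positions: offset = id * record size.
-- 0 could serve as a null pointer if slot 0 is never allocated
-- (an assumption of this sketch).
type RecordId = Word32

-- A vertex record: the heads of two on-disk linked lists.
data Vertex = Vertex
  { vertexFirstProp :: RecordId  -- first property of this vertex
  , vertexFirstEdge :: RecordId  -- first edge touching this vertex
  } deriving (Eq, Show)

-- Fixed-size serialization: two 4-byte words, always 8 bytes.
instance Binary Vertex where
  put (Vertex p e) = put p >> put e
  get              = Vertex <$> get <*> get

vertexSize :: Integer
vertexSize = 8

-- Where to seek in the vertex file for a given ID.
vertexOffset :: RecordId -> Integer
vertexOffset i = fromIntegral i * vertexSize
```

Because every record is the same size, reading vertex n is a single hSeek to vertexOffset n followed by a fixed-size read, which is what makes the linked-list layout workable.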

Commercial Haskell Special Interest Group

Planet Haskell - 17 hours 59 min ago
At FP Complete, we’re constantly striving to improve the quality of the Haskell ecosystem, with a strong emphasis on making Haskell a viable tool for commercial users. Over the past few years we’ve spoken with many companies either currently using Haskell or considering doing so, worked with a number of customers in making Haskell a reality for their software projects, and released tooling and libraries to the community. We’re also aware that we’re not the only company trying to make Haskell a success, and that others are working on similar projects to our own. We believe that there’s quite a lot of room to collaborate on identifying problems, discussing options, and creating solutions. Together with a few other companies and individuals, we are happy to announce the launch of a Commercial Haskell Special Interest Group. If you're interested in using Haskell in a commercial context, please join the mailing list. I know that we have some projects we think are worth immediate collaboration, and we'll kick off discussions on those after people have time to join the mailing list. And I'm sure many others have ideas too. I look forward to hearing them!

Combining inputs in conduit

Planet Haskell - 17 hours 59 min ago
The code in the post on using my simplistic state machine together with conduit will happily wait forever for input, and the only way to terminate it is pressing Ctrl-C. In this series I’m writing a simple adder machine, but my actual use case for these ideas is protocols for communication (see the first post), and waiting forever isn’t such a good thing; I want my machine to time out if input doesn’t arrive in a timely fashion. I can think of a few ways to achieve this:

1. Use a watchdog thread that is signalled from the main thread, e.g. by a Conduit that sends a signal as each Char passes through it.
2. As each Char is read, kick off a “timeout thread” which throws an exception back to the main thread, unless the main thread kills it before the timeout expires.
3. Run a thread that creates ticks that can then be combined with the Chars read and fed into the state machine itself.

Since this is all about state machines I’ve opted for option 3. Furthermore, since I’ve recently finished reading the excellent Parallel and Concurrent Programming in Haskell, I decided to attempt writing the conduit code myself instead of using something like stm-conduit. The idea is to write two functions: one Source to combine two Sources, and one Sink that writes its input into a TMVar. The latter of the two is the easiest one.
Given a TMVar, it just awaits input, stores it in the TMVar and then calls itself:

    sinkTMVar :: MonadIO m => TMVar a -> Sink a m ()
    sinkTMVar tmv = forever $ do
      v <- await
      case v of
        Nothing -> return ()
        Just v' -> liftIO (atomically $ putTMVar tmv v')

The other one is only slightly more involved:

    whyTMVar :: MonadIO m
             => Source (ResourceT IO) a
             -> Source (ResourceT IO) a
             -> Source m a
    whyTMVar src1 src2 = do
      t1 <- liftIO newEmptyTMVarIO
      t2 <- liftIO newEmptyTMVarIO
      void $ liftIO $ async $ fstProc t1
      void $ liftIO $ async $ sndProc t2
      forever $ liftIO (atomically $ takeTMVar t1 `orElse` takeTMVar t2) >>= C.yield
      where
        fstProc t = runResourceT $ src1 $$ sinkTMVar t
        sndProc t = runResourceT $ src2 $$ sinkTMVar t

Rather short and sweet I think. However, there are a few things that I’m not completely sure of yet.

forkIO vs. async vs. resourceForkIO: There is a choice between at least three functions when it comes to creating the threads and I haven’t looked into which one is better to use. AFAIU there may be issues around exception handling and with resources. For now I’ve gone with async for no particular reason at all.

Using TMVar: In this example the input arrives rather slowly, which means having room for a single piece at a time is enough. If the use case changes and input arrives quicker then this decision has to be revisited. I’d probably choose to use stm-conduit in that case since it uses TMChan internally.

Combining only two Sources: Again, this use case doesn’t call for more than two Sources, at least at this point. If the need arises for using more Sources I’ll switch to stm-conduit since it already defines a function to combine a list of Sources.

The next step will be to modify the conduit process and the state machine. 2015-01-22T00:00:00Z
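The heart of the combination, taking whichever of two TMVars fills first, is just an orElse between two STM transactions. A minimal standalone sketch of that idea, outside conduit and with invented names:

```haskell
import Control.Concurrent.STM
  (TMVar, atomically, newEmptyTMVarIO, orElse, putTMVar, takeTMVar)

-- The combined event stream a state machine could consume: either a
-- character of real input or a timeout tick. (Names are invented;
-- the post feeds such events into its adder machine via conduit.)
data Input = Received Char | Tick
  deriving (Eq, Show)

-- Take whichever event is available first. Characters win ties,
-- because their transaction is tried first by orElse.
nextInput :: TMVar Char -> TMVar () -> IO Input
nextInput chars ticks =
  atomically $
    (Received <$> takeTMVar chars) `orElse` (Tick <$ takeTMVar ticks)
```

In the post's setting, one thread fills the first TMVar from the real input Source and another fills the second on a timer; the machine then times out whenever too many Ticks arrive between characters.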

A Visual Introduction to DSP for SDR — now live in your browser!

Planet Haskell - 17 hours 59 min ago
My interactive presentation on digital signal processing (previous post with video) is now available on the web, at visual-dsp.switchb.org! More details, source code, etc. at the site. (P.S. I'll also be at the next meetup, which is tomorrow, January 21, but I don’t have another talk planned. (Why yes, I did procrastinate getting this site set up until a convenient semi-deadline.)) 2015-01-21T04:51:08Z Kevin Reid (kpreid) kpreid@switchb.org

HsQML 0.3.3.0 released: Control those Contexts

Planet Haskell - 17 hours 59 min ago
Happy New Year! Another year and another new release of HsQML is out, the Haskell binding to the Qt Quick framework that's kind to your skin. As usual, it's available for download from Hackage and for immediate use adding a graphical user interface to your favourite Haskell program.

The major new feature in this release is the addition of the OpenGLContextControl QML item to the HsQML.Canvas module. Previously, the OpenGL canvas support introduced in 0.3.2.0 left programs at the mercy of Qt to configure the context on their behalf, and there was no way to influence this process. That was a problem if you wanted to use the latest OpenGL features, because they require you to obtain a newfangled Core profile context, whereas Qt appears to default to the Compatibility profile (or just plain OpenGL 2.x if that's all you have).

To use it, simply place an OpenGLContextControl item in your QML document inside the window you want to control and set the properties to the desired values. For example, the following snippet of code would request the system provide it with a context supporting at least the OpenGL 4.1 Core profile:

    import HsQML.Canvas 1.0
    ...
    OpenGLContextControl {
        majorVersion: 4;
        minorVersion: 1;
        contextType: OpenGLContextControl.OpenGL;
        contextProfile: OpenGLContextControl.CoreProfile;
    }

The supported properties are all detailed in the Haddock documentation for the Canvas module. There's also a more sophisticated example in the corresponding new release of the hsqml-demo-samples package.
This example, hsqml-opengl2, displays the current context settings and allows you to experiment with requesting different values. (Screenshot caption: This graphics chip-set has seen better days.)

Also new in this release: i) the defSignalNamedParams function allows you to give names to your signal parameters, and ii) the EngineConfig record has been extended to allow setting additional search paths for QML modules and native plugins.

The first point is an interesting one because, harking back, my old blog post on the Connections item doesn't actually demonstrate passing parameters to the signal handler, and that's because you couldn't, ordinarily. You could connect a function to the signal manually using the connect() method in QML code and access arguments positionally that way, or write the handler to index into the arguments array for its parameters if you were willing to stoop that low. Now, you can give the parameters names and they will automatically be available in the handler's scope.

Finally, the Template Haskell shims inside Setup.hs have been extended to support the latest version of the Cabal API shipping with version 1.22. The Template-free SetupNoTH.hs remains, supporting 1.18 ≤ n < 1.22, and will continue to do so at least until Debian upgrades their Cabal package. Setup.hs will now try to set QT_SELECT if you're running a recent enough version of GHC to support setting environment variables, and this can prevent some problems with qtchooser(1).

release-0.3.3.0 - 2015.01.20

  * Added support for Cabal 1.22 API.
  * Added facility for controlling the OpenGL context.
  * Added defSignal variant with ability to set parameter names.
  * Added option for setting the module and plugin search paths.
  * Changed Setup script to set QT_SELECT (base >= 4.7).
  * Fixed crash resizing canvas in Inline mode.
  * Fixed leaking stable pointers when objects are collected.
  * Fixed Canvas delegate marshaller to fail on invalid values.
  * Fixed discrepancy between kinds of type conversion.

Democracy vs the 1%

Planet Haskell - 17 hours 59 min ago
To celebrate the 750th anniversary of the first meeting of the British parliament, the BBC Today programme sponsored a special edition of The Public Philosopher, asking the question Why Democracy? The programme spent much time wondering why folk felt disenfranchised but spent barely two minutes on the question of how wealth distorts politics. (Three cheers to Shirley Williams for raising the issue.) An odd contrast, if you compare it to yesterday's story that the wealthiest 1% now own as much as the other 99% combined; or to Lawrence Lessig's Mayday campaign to stop politicians slanting their votes to what will help fund their reelection; or to Thomas Piketty's analysis of why the wealthy inevitably get wealthier. (tl;dr: "Piketty's thesis has been shorthanded as r > g: that the rate of return on capital today -- and through most of history -- has been higher than general economic growth. This means that simply having money is the best way to get more money.")

Haskell Design Patterns: .Extended Modules

Planet Haskell - 17 hours 59 min ago
Introduction

For a long time, I have wanted to write a series of blogposts about Design Patterns in Haskell. This never really worked out. It is hard to write about Design Patterns. First off, I have been writing Haskell for a long time, so mostly things feel natural and I do not really think about code in terms of Design Patterns. Additionally, I think there is a very, very thin line between what we call “Design Patterns” and what we call “Common Sense”. Too much on one side of the line, and you sound like a complete idiot. Too much on the other side of the line, and you sound like a pretentious fool who needs five UML diagrams in order to write a 100-line program. However, in the last year, I have both been teaching more Haskell, and I have been reading even more code written by other people. The former made me think harder about why I do things, and the latter made me notice patterns I hadn’t thought of before, in particular if they were formulated in another way. This has given me a better insight into these patterns, so I hope to write a couple of blogposts like this over the next couple of months. We will see how it goes – I am not exactly a prolific blogger. The first blogpost deals with what I call .Extended Modules. While the general idea has probably been around for a while, the credit for this specific scheme goes to Bas van Dijk, Simon Meier, and Thomas Schilling.

.Extended Modules: the problem

This problem mainly revolves around the organisation of code. Haskell allows for building complex applications out of small functions that compose well. Naturally, if you are building a large application, you end up with a lot of these small functions.
Imagine we are building some web application, and we have a small function that takes a value and then sends it to the browser as JSON:

    json :: (MonadSnap m, Aeson.ToJSON a) => a -> m ()
    json x = do
        modifyResponse $ setContentType "application/json"
        writeLBS $ Aeson.encode x

The question is: where do we put this function? In small projects, these seem to inevitably end up inside the well-known Utils module. In larger, or more well-organised projects, it might end up in Foo.Web or Foo.Web.Utils. However, if we think outside of the box, and disregard dependency problems and libraries including every possible utility function one can write, it is clearer where this function should go: in Snap.Core. Putting it in Snap.Core is obviously not a solution – imagine the trouble library maintainers would have to deal with in order to include all these utility functions.

The basic scheme

The scheme we use to solve this is simple yet powerful: in our own application’s non-exposed modules list, we add Snap.Core.Extended.

src/Snap/Core/Extended.hs:

    {-# LANGUAGE OverloadedStrings #-}
    module Snap.Core.Extended
        ( module Snap.Core
        , json
        ) where

    import qualified Data.Aeson as Aeson
    import           Snap.Core

    json :: (MonadSnap m, Aeson.ToJSON a) => a -> m ()
    json x = do
        modifyResponse $ setContentType "application/json"
        writeLBS $ Aeson.encode x

The important thing to notice here is the re-export of module Snap.Core. This means that, everywhere in our application, we can use import Snap.Core.Extended as a drop-in replacement for import Snap.Core.

This also makes sharing code in a team easier. For example, say that you are looking for a catMaybes for Data.Vector. Before, I would have considered either defining this in a where clause, or locally as a non-exported function. This works for single-person projects, but not when different people are working on different modules: you end up with five implementations of this method, scattered throughout the codebase.
With this scheme, however, it’s clear where to look for such a method: in Data.Vector.Extended. If it’s not there, you add it.

Aside from utility functions, this scheme also works great for orphan instances. For example, if we want to serialize a HashMap k v by converting it to [(k, v)], we can add a Data.HashMap.Strict.Extended module.

src/Data/HashMap/Strict/Extended.hs:

    {-# OPTIONS_GHC -fno-warn-orphans #-}
    module Data.HashMap.Strict.Extended
        ( module Data.HashMap.Strict
        ) where

    import Data.Binary   (Binary (..))
    import Data.Hashable (Hashable)
    import Data.HashMap.Strict

    instance (Binary k, Binary v, Eq k, Hashable k) =>
            Binary (HashMap k v) where
        put = put . toList
        get = fmap fromList get

A special case of these .Extended modules is Prelude.Extended. Since you will typically import Prelude.Extended into almost all modules in your application, it is a great way to add a bunch of (very) common imports from base, so import noise is reduced. This is, of course, quite subjective. Some might want to add a few specific functions to Prelude (as illustrated below), and others might prefer to add all of Control.Applicative, Data.List, Data.Maybe, and so on.

src/Prelude/Extended.hs:

    module Prelude.Extended
        ( module Prelude
        , foldl'
        , fromMaybe
        ) where

    import Data.List  (foldl')
    import Data.Maybe (fromMaybe)
    import Prelude

Scaling up

The basic scheme breaks once our application consists of several cabal packages. If we have a package acmecorp-web, which depends on acmecorp-core, we would have to expose Data.HashMap.Strict.Extended from acmecorp-core, which feels weird. A simple solution is to create an unordered-containers-extended package (which is not uploaded to the public Hackage for obvious reasons). Then, you can export Data.HashMap.Strict.Extended from there. This solution creates quite a lot of overhead. Having many modules is fine, since they are easy to manage – they are just files after all.
Managing many packages, however, is harder: every package introduces a significant amount of overhead. For example, repos need to be maintained, and dependencies need to be managed explicitly in the cabal file. An alternative solution is to simply put all of these modules together in a hackage-extended package. This solves the maintenance overhead and still gives you a very clean module hierarchy.

Conclusion

After using this scheme for over a year in a large, constantly evolving Haskell application, it is clear to me that this is a great way to organise and share code in a team. A side-effect of this scheme is that it becomes very convenient to consider some utility functions from these .Extended modules for inclusion in their respective libraries, since they all live in the same place. If they do get added, just remove the originals from hackage-extended, and the rest of your code doesn’t even break!

Thanks to Alex Sayers for proofreading! 2015-01-20T00:00:00Z

Back in action

Planet Haskell - 17 hours 59 min ago
I don't think I ever mentioned it here but, last semester I took a much-needed sabbatical. The main thing was to take a break from all the pressures of work and grad school and get back into a healthy headspace. Along the way I ended up pretty much dropping off the internet entirely. So if you've been missing me from various mailing lists and online communities, that's why. I'm back now. If you've tried getting in touch by email, irc, etc, and don't hear from me in the next few weeks, feel free ping me again. This semester I'm teaching foundations of programming language theory with Jeremy Siek, and work on the dissertation continues apace. Over the break I had a few breakthrough moments thinking about the type system for chiastic lambda-calculi, which should help to clean up the normal forms for terms, as well as making room for extending the theory to include eta-conversion. Once the dust settles a bit I'll write some posts about it, as well as continuing the unification-fd tutorial I started last month. 2015-01-20T05:39:59Z

Introducing SJS, a type inferer and checker for JavaScript (written in Haskell)

Planet Haskell - 17 hours 59 min ago
TL;DR: SJS is a type inference and checker for JavaScript, in early development. The core inference engine is working, but various features and support for the full browser JS environment and libraries are in the works. SJS (Haskell source on github) is an ongoing effort to produce a practical tool for statically verifying JavaScript code. […]

GHC Weekly News - 2015/01/19

Planet Haskell - 17 hours 59 min ago
Hi *, It's time for some more GHC news! The GHC 7.10 release is closing in, which has been the primary place we're focusing our attention. In particular, we're hoping RC2 will be Real Soon Now. Some notes from the past GHC HQ meetings this week: GHC 7.10 is still rolling along smoothly, and it's expected that RC2 will be cut this Friday, January 23rd. Austin sent out an email about this to ghc-devs, so we can hopefully get all the necessary fixes in. Our status page for GHC 7.10 lists all the current bullet points and tickets we hope to address: ​https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.10.1 Currently, GHC HQ isn't planning on focusing many cycles on any GHC 7.10 tickets that aren't highest priority. We're otherwise going to fix things as we see fit, at our leisure - but a highest priority bug is a showstopper for us. This means if you have something you consider a showstopper for the next release, you should bump the priority on the ticket and yell at us! We otherwise think everything looks pretty smooth for 7.10.1 RC2 - our libraries are updated, and most of the currently queued patches (with a few minor exceptions) are done and merged. Some notes from the mailing list include: Austin announced the GHC 7.10.1 RC2 cutoff, which will be on Friday the 23rd. ​https://www.haskell.org/pipermail/ghc-devs/2015-January/008026.html Austin has alerted everyone that soon, Phabricator will run all builds with ./validate --slow, which will increase the time taken for most builds, but will catch a wider array of bugs in commits and submitted patches - there are many cases the default ./validate script still doesn't catch. ​https://www.haskell.org/pipermail/ghc-devs/2015-January/008030.html Johan Tibell asked about some clarifications for the HsBang datatype inside GHC. In response, Simon came back with some clarifications, comments, and refactorings, which greatly helped Johan. 
https://www.haskell.org/pipermail/ghc-devs/2015-January/007905.html Jens Petersen announced a Fedora Copr repo for GHC 7.8.4: ​https://www.haskell.org/pipermail/ghc-devs/2015-January/007978.html Richard Eisenberg had a question about the vectoriser: can we disable it? DPH seems to have stagnated a bit recently, bringing into question the necessity of keeping it on. There hasn't been anything done yet, but it looks like the build will get lighter, with a few more modules soon: ​https://www.haskell.org/pipermail/ghc-devs/2015-January/007986.html Ben Gamari has an interesting email about trying to optimize bytestring, but he hit a snag with small literals being floated out causing very poor assembly results. Hopefully Simon (or anyone!) can follow up soon with some help: ​https://www.haskell.org/pipermail/ghc-devs/2015-January/007997.html Konrad Gądek asks: why does it seem the GHC API is slower at calling native code than a compiled executable is? Konrad asks as this issue of performance is particularly important for their work. ​https://www.haskell.org/pipermail/ghc-devs/2015-January/007990.html Jan Stolarek has a simple question: what English spelling do we aim for in GHC? It seems that while GHC supports an assortment of British and American English syntactic literals (e.g. SPECIALIZE and SPECIALISE), the compiler sports an assortment of British/American identifiers on its own! ​https://www.haskell.org/pipermail/ghc-devs/2015-January/007999.html Luis Gabriel has a question about modifying the compiler's profiling output, particularly adding a new CCS (Cost Centre Structure) field. He's hit a bug it seems, and is looking for help with his patch. ​https://www.haskell.org/pipermail/ghc-devs/2015-January/008015.html Closed tickets the past few weeks include: #9966, #9904, #9969, #9972, #9934, #9967, #9875, #9900, #9973, #9890, #5821, #9984, #9997, #9998, #9971, #10000, #10002, #9243, #9889, #9384, #8624, #9922, #9878, #9999, #9957, #7298, and #9836.
2015-01-19T21:35:35Z thoughtpolice

My email is a monster

Planet Haskell - 17 hours 59 min ago
My New Year's resolution is to look at my e-mail at most once a day. If you need a response in less than a day or two, please arrange it with me in advance or use a different medium. Cartoon courtesy of Oatmeal.

Free speech values

Planet Haskell - 17 hours 59 min ago
In the aftermath of the attacks on Charlie Hebdo in Paris, there has been some high-quality thinking and writing. There have also been some really stupid things said, from the usual protagonists. It's an interesting dynamic that the internet now facilitates: as I no longer watch any news on TV (in fact I don't watch any news at all), nor subscribe to any newspaper, I'm used to reading articles from a wide range of sources. Equally, it's much easier for me to avoid opinion I disagree with (right-wing press) or trivialised, dumbed-down reporting (e.g. BBC news). Because of this ease of reading what you want to (in both the good and bad sense), I thought a lot of the reaction was measured and sensible. Turns out I was just unaware of most of the reaction going on. Anyway, there seems to be virtually nothing left to say on this, so this post is really little more than a bunch of links to pieces I thought were thoughtful and well written. I am an agnostic. I personally don't believe in any religion but I accept I can't prove that every religion is false, and so I may be wrong. I tend to treat the beliefs of any religion as arbitrary and abstract ideas. Thus one place to start is the acknowledgement that the laws of any country or civilisation are as arbitrary and ad-hoc as the rules or teachings of any religion. They are just things that people choose to believe in, or follow, or not violate. In the UK, some of our law is based in Christianity (e.g. thou shalt not murder - though I've no idea whether ideas like that actually predate Christianity; I wouldn't be surprised if they do) though other parts of Christianity are not encoded in law (adultery is not illegal, for example). Many have carelessly labelled these attacks as an attack on free speech, and thus the reaction as a defence of free speech.
As such, much has been written about how it's possible to defend the right Charlie Hebdo has to publish anything they want, even if it's offensive, even whilst criticising their choice of content. Freedom of speech is, I believe, essential for any sort of democracy to work. This doesn't suggest that if you have freedom of speech then you have a democracy (I don't believe there is a functioning democracy in the UK, or arguably anywhere in Europe or North America; merely systemic corporate dictatorships disguised by corrupt elected faux-representatives), but without freedom of speech, you certainly don't have any ability to transparently hold people to account, and thus corruption and abuse will obviously take hold. But freedom of speech is a choice; it's not something axiomatic to existence. It's something many people have chosen to attach huge importance to and as such will defend. But just because there are widely agreed reasons to defend the concept of freedom of speech doesn't mean that it's self-evidently a "better" idea than not having freedom of speech. To judge something as "better" than something else requires all manner of discussion of the state of human existence. Consequently, criticising people for not holding freedom of speech in the same regard as we claim to do is no different from criticising people for their choice of clothing, or housing, or diet, or career, or religious views. Of course in the UK, we don't have freedom of speech as an absolute concept, nor does most of Europe. In the UK, "incitement to racial hatred" was established as an offence by the provisions of §§ 17-29 of the Public Order Act 1986, and the Criminal Justice and Public Order Act 1994 made publication of material that incited racial hatred an arrestable offence. It's not difficult to find stories of people getting arrested for saying things on Twitter; the easiest example is of course making bomb threats.
UKIP (an extremist right-wing political party in the UK) even managed to get police to visit a man who'd posted a series of tweets fact-checking UKIP policies. Thankfully, he wasn't arrested. The USA appears to hold freedom of speech as a much more inviolable concept. For example: the ACLU vigorously defends the right of neo-Nazis to march through a community filled with Holocaust survivors in Skokie, Illinois, but does not join the march; they instead vocally condemn the targeted ideas as grotesque while defending the right to express them. But whilst the outpouring in Paris and the crowds gathered as a statement of unity were warmly defiant, it is somewhat likely that it was rather more than physical violence that was being defied, and more than freedom of speech that was being defended by the crowd. As Brian Klug wrote: Here is a thought experiment: Suppose that while the demonstrators stood solemnly at Place de la Republique the other night, holding up their pens and wearing their “je suis charlie” badges, a man stepped out in front brandishing a water pistol and wearing a badge that said “je suis cherif” (the first name of one of the two brothers who gunned down the Charlie Hebdo staff). Suppose he was carrying a placard with a cartoon depicting the editor of the magazine lying in a pool of blood, saying, “Well I’ll be a son of a gun!” or “You’ve really blown me away!” or some such witticism. How would the crowd have reacted? Would they have laughed? Would they have applauded this gesture as quintessentially French? Would they have seen this lone individual as a hero, standing up for liberty and freedom of speech? Or would they have been profoundly offended? And infuriated. And then what? Perhaps many of them would have denounced the offender, screaming imprecations at him. Some might have thrown their pens at him.
One or two individuals — two brothers perhaps — might have raced towards him and (cheered on by the crowd) attacked him with their fists, smashing his head against the ground. All in the name of freedom of expression. He would have been lucky to get away with his life. It seems that there are some things you can say and governments will try to protect you. It would appear in much of Europe that blaspheming Islam is legally OK. But, as noted above, saying other things will get you arrested. The French "comedian" Dieudonné was arrested just 48 hours after the march through Paris on charges of "defending terrorism". Whilst not arrested, the UK Liberal Democrat MP David Ward tweeted "Je suis Palestinian" during the Paris marches and criticised the presence of the Israeli prime minister, Netanyahu, subsequently eliciting a complaint from the Israeli ambassador to Britain. Of course the "world leaders" who gathered in Paris have a wonderful record themselves on "protecting" free speech. Charlie Hebdo did not practise even-handed satire: they mocked select, specific targets, and did not mock all major religions equally. It's been widely reported that they sacked a cartoonist for making an anti-Semitic remark and that: Jyllands-Posten, the Danish newspaper that published caricatures of the Prophet in 2005, reportedly rejected cartoons mocking Christ because they would "provoke an outcry" and proudly declared it would "in no circumstances ... publish Holocaust cartoons". But of course it comes down to the content of the publication. In this case the cartoons exist to ridicule, make fun of and offend members of one of the world's largest religions, Islam, by mocking their prophet Mohammed. As Amanda Taub writes: Dalia Mogahed, the Director of Research at the Institute for Social Policy and Understanding, explained that Mohammed is a beloved figure to Muslims, and "it is a human impulse to want to protect what's sacred to you".
Mogahed compared the cartoons to the issue of flag-burning in the United States, noting that a majority of Americans favour a constitutional amendment to ban flag-burning for similar reasons: the flag is an important symbol of a national identity, and many Americans see flag-burning as an attack on that identity, or even on the country itself. That's not extremism or backwardness; it's about protecting something you cherish. In any large group of people, there will be a vast majority of sound mind and thought, and a small minority who are not. This is just the fact that all over the earth, humans are not all the same: there is some variance in health, intelligence, and every other aspect of what a human is. Any large sampling of humans will show the same set of variations. So if you offend a huge group of people, you will offend tall people and short people, rich people and poor people, fat people and thin people, violent people and peaceful people. Unsurprisingly, it would appear that the background of these killers suggests there is little to do with Islam there, and more to do with their upbringing, family, education and integration with society. Thus even if you feel Charlie Hebdo's publication of the cartoons served some purpose (given their biased choice of target, their purpose does not seem to be an exercise in itself of freedom of speech), it should be obvious that by offending so many people, they were placing themselves in danger. The same is true of any sustained, systemic, deliberate offence to any of this planet's major religions, races, nationalities or any other grouping of humans which share values. So it becomes a balancing act between how much you believe in the message you're publishing and the risk you're putting yourself in.
You can view the actions of Edward Snowden in this same context: he felt that the message he was delivering on the abuses of surveillance power carried out by governments across the world outweighed the significant danger he was putting himself in, and so both delivered the message and accepted the need to flee from his country, probably never to return, in fear of the consequences of his actions. Thankfully, throughout history, there have been people who have chosen to put themselves in the path of great harm (often losing their lives as a result) in order to report, expose, document and publicise matters which the wider public needed to know. Governments, monarchies and empires have crumbled when faced with popular revolt. So freedom of speech requires consideration. It is perfectly reasonable not to say something because you anticipate you won't enjoy the consequences. Most of us do not conduct our lives by going around saying anything and everything we want to our friends and family: if we did, we'd rapidly lose a lot of friends. The expression "biting your tongue" exists for a reason. Equally, it's perfectly reasonable for a news outlet to decide not to re-publish the Charlie Hebdo cartoons if they fear a violent response that they suspect the local police forces cannot prevent; not to mention just not wanting to offend so many people. I view as daft the idea that people should never choose not to publish something out of fear. People absolutely should choose not to publish, if they feel the risk to the things they hold dear is not outweighed by the message they're delivering. Everything in life is a trade-off and every action has consequences. Whilst I agree with the right to free speech, that does not imply saying anything you like is free of consequences. If it were, it would require that words have no meaning, and subsequently all communication is void: if anything you say has no consequence then you can say nothing. 
I am certainly not suggesting the murders were in any way justified, or that Islam or any other religion or collection of humans should be beyond criticism or even ridicule. At the end of the day, no human is perfect, and as such we can all benefit from a thorough dose of criticism once in a while. Every article I've linked to in this post repeats that such violence, regardless of the provocation, is abhorrent, and I agree with that: murder is never an acceptable response to any drawing, written or spoken word. But that doesn't mean that these events weren't predictable. Finally, then, we get to the insanely idiotic response from the UK government. That MI5 should have more powers, which they don't need (they probably just need more money), and that we must deny terrorists "safe space" to communicate online. Which means banning encryption, which means no one can use the internet at all. The home secretary, Theresa May, said: "We are determined that as far as possible there should be no safe spaces for terrorists to communicate. I would have thought that that should be a principle ... that could have been held by everybody, across all parties in this House of Commons." So of course, if terrorists can't communicate in private then no one can. Quickly, we've immediately gone from lazy labelling of events as an attack on free speech to a knee-jerk response of "free speech yes, but you certainly can't have free speech in private, because you might be a terrorist". Again, it's a trade-off. I doubt that having such restrictions on communication will make the earth or this country safer for anyone, and of course the impossibility of a controlled study means it cannot be proven one way or another. No security service is ever going to be flawless, and from time to time very horrible things will continue to happen. I think most people are aware of this and accept this; we're all going to die after all.
The loss of civil liberties though is certainly far more worrying to me. In theory, I would think these proposals so lunatic as to never see the light of day (it would be completely impossible to enforce for one thing - terrorists, along with everyone else, would learn to use steganography to encode their messages in pictures of cats, thus rendering their traffic no different to that of everyone else). Sadly, Labour have stated they don't believe their position to be that far away from the Conservatives', which is deeply worrying. Labour don't exactly have a great record in this area either, given their previous ID card schemes and the introduction of detention without charge. What is needed is some transparency. We need an informed debate, with MI5 and GCHQ providing some independently verifiable facts and figures that demonstrate how they are being thwarted in what they're trying to do. We need to understand properly what the risk to us is, and most importantly we need to understand why these threats exist and what else we can do to make them decrease. I've never seen it said that any UK Government policy in the last 15 years has made the UK less of a target for such attacks. Maybe we should look at that before we start subjecting ourselves to Orwellian control. 2015-01-17T17:35:00Z

Functional Software Developer at Moixa Technology (Full-time)

Planet Haskell - 17 hours 59 min ago
Green energy / IoT startup Moixa Technology is seeking to add to our software team. We're a small team dedicated to developing technology that enables a radical change in the way that energy is used in the home. We are just ramping up to deliver a project (working with large energy companies) demonstrating how communities can use the Maslow system across a group of homes to share energy in that community. We will need to deploy software that addresses some challenging problems to make this possible. The code that we develop is built around providing services based on the hardware systems that we are deploying, and so needs to work with the constraints and characteristics of the hardware & low-level firmware we have built. We're looking for individuals with a generalist approach who are willing and able to participate in all aspects of design, implementation and operation of our platform. The candidate must be happy in a small team, and also able to work autonomously. We are expanding as we ramp up to deliver the next generation of control software to our increasing number of deployed systems. This is an exciting moment for the company.

Tasks:
- Design & implementation of all parts of our software stack (web frontend & backend, data analytics, high-level code on IoT devices)
- Operations support (expected <20% of time)

Our current stack involves:
- Scala (Akka, Play) / ClojureScript / Haskell
- Postgres, neo4j
- Raspberry Pi / Arch Linux
- PIC32 microcontroller / C

Skills and Requirements:
- Experience in one or more functional languages
- Familiarity with at least one database paradigm
- Linux scripting and operation

Advantages:
- Familiarity with (strongly) typed functional languages (Haskell/ML/Scala)
- Embedded programming experience
- Experience in data analytics (Spark or similar)
- Experience in IoT development
- Open Source contributions

Moixa Technology is based in central London (Primrose Hill). Salary depending on experience + performance-based share options.
Get information on how to apply for this position. 2015-01-15T07:07:37Z

Thought on JavaScript

Planet Haskell - 17 hours 59 min ago
Over the holidays I’ve been reading Douglas Crockford’s excellent book JavaScript: The Good Parts. About halfway through I came to the conclusion that JavaScript is an “anti-LISP”. There are many reasons to learn LISP, but none of them is “LISP is widely used in industry.” As Eric Raymond's famous words claim, knowing LISP will make you a better programmer. On the other hand there seem to be almost no reasons to learn JavaScript. It sure doesn’t seem to teach anything that’ll make you a better programmer. The only reason I can come up with is “JavaScript is widely used in industry.” 2015-01-14T00:00:00Z

User Interfaces for Users

Planet Haskell - 17 hours 59 min ago
Summary: When designing a user interface, think about the user, and what they want to know. Don't just present the information you know.

As part of my job I've ended up writing a fair amount of user interface code, and feedback from users has given me an appreciation of some common mistakes. Many user interfaces are designed to present information, and one of the key rules is to think about what the user wants to see, not just to show the information you have easily to hand. I was reminded of this rule when I was expecting a parcel. On the morning of Saturday 10/1/2015 I used a "track my parcel" interface to find:

The interface presents a lot of information, but most of it is interesting to the delivery company, not to me. I have basically one question: when will my parcel arrive? From the interface I can see:

- The parcel is being shipped with "Express AM" delivery, with no link to what that means. Searching leads me to a page saying that guarantees delivery before 1pm on a business day (some places said noon, some said 1pm). That is useful information if I want to enter into a contract with the delivery service, but not when I'm waiting for a parcel. What happens on Saturday? Do they still deliver, just not guarantee the time? Do they wait until the next business day? Do they do either, as they see fit?
- My parcel has been loaded onto a vehicle. Has the vehicle left the depot, or is that a separate step? How many further steps are there between loading and delivery? This question is easy to answer after the parcel has been delivered, since the additional steps show up in the list, but difficult to answer before.

On Saturday morning my best guess about when the parcel would arrive was between then and Monday 1pm. Having been through the complete process, I now know the best answer was between some still unknown time on Monday morning and Monday 1pm.
With that information, I'd have taken my son to the park rather than keeping him cooped up indoors. I suggest the company augment their user interface with the line "Your parcel will be delivered on Monday, between 9am and 1pm". It's information they could have computed easily, and it answers my question.

The eagle-eyed may notice that there is a plus to expand to show more information. Alas, all that shows is:

I think they're trying to say my puny iPhone can't cope with the bold font that is essential to tell me the status and I should get an iPhone plus... Checking the desktop website also showed no further information. 2015-01-13T18:56:00Z Neil Mitchell noreply@blogger.com
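The suggestion above (compute the answer the user actually wants from facts the courier already has) can be sketched in a few lines of Haskell. The types and the delivery rules here are hypothetical, inferred from the post, not the courier's real system:

```haskell
-- Hypothetical types: the courier already knows the service level and the
-- current day, which is enough to compute the user-facing answer.
data Day = Mon | Tue | Wed | Thu | Fri | Sat | Sun deriving (Eq, Show)
data Service = ExpressAM | Standard deriving (Eq, Show)

-- Assumed rule: "Express AM" guarantees delivery before 1pm on a business
-- day, so a parcel tracked over the weekend arrives on Monday morning.
deliveryMessage :: Service -> Day -> String
deliveryMessage ExpressAM today
  | today `elem` [Sat, Sun] =
      "Your parcel will be delivered on Monday, between 9am and 1pm"
  | otherwise =
      "Your parcel will be delivered today, before 1pm"
deliveryMessage Standard _ =
  "Your parcel will be delivered within 3 business days"
```

The point is not these particular rules, which are guesses, but that the one question the user has can be answered directly from data the interface already displays.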

FP Complete's software pipeline

Planet Haskell - 17 hours 59 min ago
FP Complete's mission is easily expressed: increase the commercial adoption of Haskell. We firmly believe that, in many domains, Haskell is the best way to write high-quality, robust, performant code in a productive (read: fast-to-market) way. Those of you who, like me, are members of the Haskell community are probably using Haskell because you believe the same thing, and believe, like us, that we can make the world a better place by getting more software to be written in Haskell.

By the way: FP Complete is looking to expand its team of Haskell developers. If this idea excites you, don't forget to send us your resumes!

There are two interesting groups that I've spelled out in that paragraph: commercial users of Haskell, and the Haskell community. I want to clarify how our software development process interacts with these two groups, in part to more fully answer a question from Reddit.

The Haskell community has created, and continues to create, amazing software. As a company, our main objective is to help other companies find this software, understand its value, and knock down any hurdles to adoption that they may have.
These hurdles include:

- Lack of expertise in Haskell at the company
- Limitations in the ecosystem, such as missing libraries for a particular domain
- Providing commercial support, such as high-availability hosted solutions and on-call engineers to provide answers and fix problems
- Providing tooling as needed by commercial users

You can already see quite a bit of what we've done, for example:

- Created FP Haskell Center, which addressed requests from companies to provide a low-barrier-to-entry Haskell development environment for non-Haskell experts
- Put together School of Haskell to enable interactive documentation to help both new Haskellers, and those looking to improve their skills
- Started the LTS Haskell project as a commercial-grade Haskell package set, based on our previous work with FP Haskell Center libraries and Stackage
- Provided Stackage Server as a high-availability means of hosting package sets, both official (like Stackage) and unofficial (like experimental package releases)

There's something all of these have in common, which demonstrates what our software pipeline looks like:

- We start off gathering requirements from companies - both those that are and are not our customers - to understand needs
- We create a product in a closed-source environment
- We iterate with our customers on the new product to make sure it addresses all of their needs
- After the product reaches a certain point of stability (a very subjective call), we decide to release it to the community, which involves: polishing it, discussing it with relevant members/leaders in the community, and making it officially available

Not every product we work on goes through all of these steps. For example, we might decide that the product is too specialized to be generally useful. That's why we sometimes hold our cards a bit close to our chest: we don't want to talk about every new idea we have, because we know some of them may be duds.

Some people may ask why we go through that fourth step I listed above.
After all, taking a product from "it works well for individual companies with ongoing support from us" to "it's a generally viable product for commercial and non-commercial users" is an arduous process, and doesn't directly make us any money. The answer is simple, and I already alluded to it above: the great value in Haskell comes from the wonderful work the community does. If we're to succeed in our mission of getting Haskell to improve software development in general, we need all the help we can get.

So that's our strategy. You're going to continue seeing new products released from us as we perfect them with our customers. We want to find every way we can to help the community succeed even more. I'm also making a small tweak to our strategy today: I want to be more open with the community about this process. While not everything we do should be broadcast to the world (because, like I said, some things may be duds), I can share some of our directions earlier than I have previously.

So let me lay out some of the directions we're working on now:

- Better build tools. LTS Haskell is a huge step in that direction, providing a sane development environment. But there's still quite a bit of a manual process involved. We want to automate this even more. (And to directly answer hastor's question on Reddit: yes, we're going to release a Docker image.)
- Better code inspection. We've developed a lot of functionality as part of FP Haskell Center to inspect type signatures, identifier locations, usage, etc. We want to unlock that power and make it available outside of FP Haskell Center as well.
- In a completely different direction: we're working on more powerful distributed computing capabilities. This is still early stage, so I can't say much more yet.

Outside of products themselves, we want to get other companies on board with our goal of increased Haskell adoption as well.
We believe many companies using Haskell today, and even more so companies considering making the jump, have a huge number of ideas to add to the mix. We're still ironing out the details of what that will look like, but expect to hear some more from us in the next few months about this.

And I'm giving you all a commitment: expect to see much more transparency about what we're doing. I intend to share things with the community as we go along. Chris Done and Mathieu Boespflug will be part of this effort as well. If you have questions, ask. We want to do all we can to make the community thrive.