Let’s say you need to force (evaluate) a lazy Haskell list.
A long time ago, this was a common way to fight lazy I/O: you read a String and then force it. These days you can have normal I/O with strict Text or ByteString instead.
Anyway, let’s say you do need to force a list. This came up in a pull request for lexer-applicative. Another scenario is if you want to evaluate a lazy Text or ByteString without copying the chunks. Or, you know, for any other reason.
First of all, how exactly do you want to force it? There are two primary ways: force the spine or force the elements too. (You can’t force the elements without forcing the spine.)
Forcing the spine means forcing all cons cells without touching the elements. One way to do that is to evaluate the length of the list, but that feels ad hoc because it computes a result that is not needed. Here’s an elegant way to walk the spine:
forceSpine :: [a] -> ()
forceSpine = foldr (const id) ()
(Obviously, you need to force the resulting () value, by calling evaluate or seq-ing it to something else, for any evaluation to take place.)
const id, also known as flip const, returns its second argument while ignoring the first. So the evaluation goes like this:
forceSpine [x1, x2]
= foldr (const id) () [x1, x2]
= (const id) x1 $ foldr (const id) () [x2]
= foldr (const id) () [x2]
= (const id) x2 $ foldr (const id) () []
= foldr (const id) () []
= ()
See how forceSpine “unpacks” the list (thus forcing the spine) and throws all elements away.
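To see this at work, forceSpine can be combined with evaluate from Control.Exception. The following standalone snippet (the example values are mine, not from the article) shows that forcing the spine tolerates undefined elements but not an undefined tail:

```haskell
import Control.Exception (evaluate)

forceSpine :: [a] -> ()
forceSpine = foldr (const id) ()

main :: IO ()
main = do
  -- Undefined *elements* are fine: only the cons cells are forced.
  () <- evaluate (forceSpine [undefined, undefined :: Int])
  putStrLn "spine of [undefined, undefined] forced"
  -- An undefined *tail* would throw, since the spine itself is bottom:
  -- evaluate (forceSpine (True : undefined))  -- throws Prelude.undefined
```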
I mentioned that you may want to force the elements of the list, too. Most of the time you want to deep-force them, and so you should just rnf the whole list. Even when the elements are atomic (like Char or Int), evaluating them to weak head normal form is still equivalent to rnf.
But occasionally you do want to shallow-force the elements. In that case, simply replace const id with seq in the definition of forceSpine to obtain forceElements:
forceElements :: [a] -> ()
forceElements = foldr seq ()
Again, looking at the evaluation chain helps to understand what’s going on:
forceElements [x1, x2]
= foldr seq () [x1, x2]
= seq x1 $ foldr seq () [x2]
= foldr seq () [x2]
= seq x2 $ foldr seq () []
= foldr seq () []
= ()
Same as before, only elements get forced before being thrown away.
And here’s a table that may help you understand better the difference between seq, forceSpine, forceElements and rnf:

| list | list `seq` () | forceSpine | forceElements | rnf |
| --- | --- | --- | --- | --- |
| [Just True] | () | () | () | () |
| [Just undefined] | () | () | () | undefined |
| [undefined] | () | () | undefined | undefined |
| True : undefined | () | undefined | undefined | undefined |
| undefined | undefined | undefined | undefined | undefined |
Since forceSpine and forceElements are based on foldr, they can be trivially generalized to any Foldable container, with the caveat that you should understand how the container and its Foldable instance work. For example, forceSpine is useless for Data.Map.Map, since it is already spine-strict, and forceElements for a tuple will only force its second element.
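A sketch of those generalized versions (the names forceSpineF and forceElementsF are mine), with the caveats from the paragraph above illustrated on a Map and a tuple:

```haskell
import qualified Data.Map as M

-- Generalizations of forceSpine / forceElements to any Foldable.
-- Whether they do anything useful depends on the container's own
-- strictness and on how its Foldable instance traverses it.
forceSpineF :: Foldable t => t a -> ()
forceSpineF = foldr (const id) ()

forceElementsF :: Foldable t => t a -> ()
forceElementsF = foldr seq ()

main :: IO ()
main = do
  -- Data.Map is already spine-strict, so forceSpineF is a no-op in
  -- practice, but forceElementsF still forces the values.
  print (forceElementsF (M.fromList [(1 :: Int, "a"), (2, "b")]))
  -- For a pair, foldr only visits the second component.
  print (forceElementsF ('x', True))
```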
2015-05-28T20:00:00Z http://ro-che.info/articles/2015-05-28-force-list

Categories: Functional Programming

Over the last few days I wrote prover, a program that can reduce terms and prove equality in the untyped lambda calculus.
Motivation
Such a tool ought to exist, but I couldn’t find anything like that.
Occasionally I want to prove stuff, such as Monad or Applicative laws for various types. Very seldom such proofs require induction; mostly they involve simple reductions and equational reasoning.
Sometimes I specifically want to do it in the untyped lambda calculus just to prove a point. Other times, I’m interested in a typed theory (say, System F as an approximation for Haskell). Fortunately, the subject reduction property guarantees us that the results proven in ULC will hold in System F just as well.
Algebraic datatypes are more tricky. From the practical perspective, I could add data declarations and case expressions to prover, and desugar them via the Church encoding, just like I did with Maybe by hand. But there’s also the theoretical side of proving that the results about Church-encoded types translate back into the original language. Intuitively, that should hold, but I’d appreciate links to proper proofs if anyone is aware of them.
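To make the Church-encoding remark concrete, here is a sketch of Church-encoded Maybe written in Haskell (my own illustration, not code from the prover repository; RankNTypes is needed for the type alias):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Church-encoded Maybe is its own eliminator: it takes a "nothing"
-- result and a "just" continuation, exactly like the maybe function.
type ChurchMaybe a = forall r. r -> (a -> r) -> r

nothing :: ChurchMaybe a
nothing n _j = n

just :: a -> ChurchMaybe a
just x _n j = j x

-- Converting back shows the encoding is faithful:
toMaybe :: ChurchMaybe a -> Maybe a
toMaybe m = m Nothing Just

main :: IO ()
main = do
  print (toMaybe (just (42 :: Int)))            -- Just 42
  print (toMaybe (nothing :: ChurchMaybe Int))  -- Nothing
```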
Finally, part of the motivation was to experiment with some pieces of Haskell tech that I don’t use in my day-to-day work and evaluate them. This part was certainly successful; I may share some specific impressions later. Until then, feel free to read the code; at this point it’s not too convoluted and yet I’m sure you’ll find some interesting bits there.
Demo
Here is an example invocation that establishes the right identity monad law for the reader monad:
% prover -i examples/reader.lam --equal 'bind a return' a
bind a return
{- inline bind -}
= (λb c d. c (b d) d) a return
{- inline return -}
= (λb c d. c (b d) d) a (λb c. b)
{- β-reduce -}
= (λb c. b (a c) c) (λb c. b)
{- β-reduce -}
= λb. (λc d. c) (a b) b
{- β-reduce -}
= λb. (λc. a b) b
{- β-reduce -}
= λb. a b
{- η-reduce -}
= a
The file examples/reader.lam (included in the repository) contains the definitions of return and bind:
return = \x r. x ;
bind = \a k r. k (a r) r ;
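These definitions transcribe directly into Haskell as the function (reader) monad, so the law prover establishes can also be spot-checked there (the transcription and names returnR/bindR are mine):

```haskell
-- Haskell transcription of examples/reader.lam:
--   return = \x r. x ;   bind = \a k r. k (a r) r ;
returnR :: a -> (r -> a)
returnR x = \_r -> x

bindR :: (r -> a) -> (a -> (r -> b)) -> (r -> b)
bindR a k = \r -> k (a r) r

main :: IO ()
main = do
  let a = (* 2) :: Int -> Int
  -- Right identity, checked pointwise at a few arguments:
  print (map (bindR a returnR) [1, 2, 3] == map a [1, 2, 3])  -- True
```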
You can also ask prover to reduce an expression:
% prover -i examples/arith.lam --reduce 'mult two three'
λa b. a (a (a (a (a (a b)))))
Files are optional; you can give an entire term on the command line:
% prover --reduce '(\x. y x) (\z . z)'
y (λa. a)
prover uses De Bruijn indices, so bound variable names are not preserved.
One thing to note is that right now --reduce reports a fixed point of the reduction pipeline and not necessarily a normal form:
% prover --reduce '(\x. x x) (\x. x x)'
(λa. a a) (λa. a a)
If prover can’t find a reduced form, it will say so:
% prover --reduce '(\x. x x) (\x. x x x)'
No reduced form found
prover has a couple of knobs for more complicated cases. There is the number of iterations configured with --fuel; and for the --equal mode there is --size-limit, which instructs the tool to ignore large terms. E.g. this invocation completes immediately:
% prover -i examples/pair.lam -i examples/bool.lam -i examples/arith.lam --reduce 'pred one'
λa b. b
But in order to get a nice proof for the same reduction, you’ll need to find the right limits and wait for about 7 seconds. You will also be surprised how non-trivial the proof is.
% prover -i examples/pair.lam -i examples/bool.lam -i examples/arith.lam \
--equal 'pred one' zero \
--fuel 50 --size-limit 40
pred one
{- inline pred -}
= (λa b c. snd (a (λd. pair true (fst d b (λe. e) (snd d))) (pair false c))) one
[...]
= λa b. pair true ((λc. c (λd e. e) b) (λc d. c) a (λc. c) ((λc. c (λd e. e) b) (λc d. d))) (λc d. d)
[...]
= λa b. b
zero
{- inline zero -}
= λa b. b
This is because --equal has to consider all reduction paths to find the minimal one, and there are too many different ways to reduce this term.
Finally, its majesty factorial:
% prover -i examples/pair.lam -i examples/bool.lam -i examples/arith.lam -i examples/fixpoint.lam \
--fuel 20 \
--reduce 'fact three'
λa b. a (a (a (a (a (a b)))))
(I didn’t manage to compute fact four, though.)
2015-05-27T20:00:00Z http://ro-che.info/articles/2015-05-27-prover

Categories: Functional Programming

Grade inflation. It’s terrible, horrible, no good, very bad, and ruining everything. …right? Well… I’m not so sure. What I do know is that the typical conversation around grade inflation frustrates me. At best, it often leaves many important assumptions unstated and unquestioned. Is … Continue reading →

Categories: Functional Programming

For the last few months I have been working on extending Glasgow Haskell Compiler with injective type families. At first this seemed like a fairly simple project but in the end it turned out to be much more interesting than initially thought. (I suppose that’s how research mostly turns out.) There are still some rough […]

Categories: Functional Programming

In the last couple posts I've used some 'free' constructions, and not remarked too much on how they arise. In this post, I'd like to explore them more. This is going to be something of a departure from the previous posts, though, since I'm not going to worry about thinking precisely about bottom/domains. This is [...]

Categories: Functional Programming

Functional Programming in a Stateful World
2015-05-26T05:05:10Z

Categories: Functional Programming

Introduction
When writing some code recently, I came across a very interesting Applicative Functor. I wanted to write about it for two reasons:
It really shows the power of Applicative (compared to Monad). Applicative does not require access to previously computed results, which helps in this case, because it allows us to execute statements in whatever order is convenient.
I think it is novel: I dug around for a bit and could not find a similar Applicative in any Haskell code.
This blogpost is written in literate Haskell so you should be able to just load it up in GHCi and play around with it (you can find the raw .lhs file here).
> {-# LANGUAGE BangPatterns #-}
> {-# LANGUAGE GADTs #-}
> {-# LANGUAGE ScopedTypeVariables #-}
> import Control.Applicative (Applicative (..), (<$>))
> import Control.Monad (forM, liftM, liftM2)
> import Control.Monad.State (State, runState, state)
> import Unsafe.Coerce (unsafeCoerce)
> import Data.List (sortBy)
> import qualified Data.Map as M
> import Data.Ord (comparing)
> import qualified Data.OrdPSQ as PSQ
> import Data.Traversable (traverse)
> import qualified Data.Vector as V
> import GHC.Exts (Any)
The problem
In our example, we will be modeling a dessert restaurant.
> type Dessert = String
We keep the inventory of our restaurant simply as a list. The important invariant here is that the inventory is always ordered from cheapest to most expensive.
> type Inventory = [Dessert]
> defaultInventory :: Inventory
> defaultInventory =
> [ "Pancake"
> , "Apple Pie"
> , "Apple Pie"
> , "Tiramisu"
> ]
Whenever a client wants to order something, they have two options:
Request a specific dessert;
Just get the cheapest one we have available.
In the first case, they will not get served anything if the specific dessert is out of stock. In the second case, they will only miss out on a dessert when our inventory is completely empty.
> data Request
> = RequestSpecificDessert Dessert
> | RequestCheapestDessert
> deriving (Show)
Let’s implement the logic for serving a request. We use State Inventory to keep track of what’s available.
> doRequest :: Request -> State Inventory (Maybe Dessert)
For RequestCheapestDessert, we make use of the fact that our inventory is sorted by price. This means the head of the list is the cheapest dessert, so we serve that and put the tail of the list (xs) back.
We can do that conveniently using the state function, which allows us to modify the state and compute a result at the same time:
state :: (s -> (a, s)) -> State s a
The implementation becomes:
> doRequest RequestCheapestDessert =
> state $ \inventory -> case inventory of
> [] -> (Nothing, [])
> (dessert : xs) -> (Just dessert, xs)
In case the client wants a specific dessert, we use break to take out the requested item from the inventory list.
> doRequest (RequestSpecificDessert requested) =
> state $ \inventory -> case break (== requested) inventory of
> (xs, dessert : ys) -> (Just dessert, xs ++ ys)
> (xs, []) -> (Nothing, xs)
> test01 = runState (doRequest RequestCheapestDessert) defaultInventory
> test02 = runState (doRequest (RequestSpecificDessert "Apple Pie")) defaultInventory
Let’s check if this works:
*Main> runState (doRequest RequestCheapestDessert) defaultInventory
(Just "Pancake",["Apple Pie","Apple Pie","Tiramisu"])
*Main> runState
(doRequest (RequestSpecificDessert "Apple Pie")) defaultInventory
(Just "Apple Pie",["Pancake","Apple Pie","Tiramisu"])
Looking good so far!
Because our restaurant wants to make as much money as possible, we choose to first serve the people who order a specific dessert. In order to do that, we have a ‘Priority’ type, and each kind of request maps to a priority. Lower numbers mean higher priority.
> type Priority = Int
> requestPriority :: Request -> Priority
> requestPriority (RequestSpecificDessert _) = 0
> requestPriority RequestCheapestDessert = 1
Now let’s see what happens when a bunch of friends visit our restaurant.
> friendsRequests :: [Request]
> friendsRequests =
> [ RequestCheapestDessert
> , RequestSpecificDessert "Apple Pie"
> , RequestCheapestDessert
> , RequestSpecificDessert "Pancake"
> , RequestSpecificDessert "Crème brûlée"
> ]
Easy: we first sort the requests by priority, and then we apply doRequest on every Request. We keep the requests so we know which Dessert corresponds to which Request.
> doRequests :: [Request] -> State Inventory [(Request, Maybe Dessert)]
> doRequests requests =
> forM (sortBy (comparing requestPriority) requests) $
> \req -> (,) req <$> doRequest req
Let’s run this for our example to see if it worked and if we got the priorities right:
> test03 = runState (doRequests friendsRequests) defaultInventory
*Main> runState (doRequests friendsRequests) defaultInventory
( [ (RequestSpecificDessert "Apple Pie", Just "Apple Pie")
, (RequestSpecificDessert "Pancake", Just "Pancake")
, (RequestSpecificDessert "Crème brûlée", Nothing)
, (RequestCheapestDessert, Just "Apple Pie")
, (RequestCheapestDessert, Just "Tiramisu")
]
, []
)
Works great! However, it gets trickier. What if, instead of just a list, we have something with a bit more structure:
> data Family a = Family
> { familyParent1 :: a
> , familyParent2 :: a
> , familyChildren :: V.Vector a
> } deriving (Show)
And we want to implement:
> doFamilyRequests
> :: Family Request -> State Inventory (Family (Maybe Dessert))
> doFamilyRequests = error "Implement me"
How do we go about that? Instead of just sorting by priority, we need to tag which request belongs to which parent or child, then sort them, and… it gets messy, especially if the problem becomes more complicated. Imagine, for example, that children are given a bit more priority. It would be cool if we could separate the evaluation order (priority) from our actual logic.
Fortunately, there is an Applicative Functor which solves exactly this problem.
The Prio Applicative
The Prio Applicative has three type parameters:
p: The priority type, typically something like Int or Double;
m: The Monad we are annotating with priorities, for example State Inventory;
a: Our result type.
We use a GADT which mirrors the interface of Applicative, and one additional constructor, which holds a monadic action together with its priority [1].
> data Prio p m a where
> Pure :: a -> Prio p m a
> App :: Prio p m (a -> b) -> Prio p m a -> Prio p m b
> Prio :: p -> m a -> Prio p m a
For reference, here is the interface of Applicative again:
class Functor f => Applicative f where
pure :: a -> f a
(<*>) :: f (a -> b) -> f a -> f b
We can define a functor instance in terms of Applicative:
> instance Functor (Prio p m) where
> fmap f = App (Pure f)
And we can use the constructors to implement the Applicative instance:
> instance Applicative (Prio p m) where
> pure = Pure
> (<*>) = App
Now, we probably want to hide the actual constructors from the users and just provide a simple interface. Our interface consists of three functions:
prio annotates a monadic action with a priority;
modifyPrio modifies the priorities in a Prio value;
runPrio evaluates the Prio to the base Monad.
The implementation of prio is straightforward:
> prio :: p -> m a -> Prio p m a
> prio = Prio
A simple implementation of modifyPrio walks through the tree and modifies priorities (Prio nodes) as it encounters them [2].
> modifyPrio :: forall p m a. (p -> p) -> Prio p m a -> Prio p m a
> modifyPrio f = go
> where
> go :: forall b. Prio p m b -> Prio p m b
> go (Pure x) = Pure x
> go (App x y) = App (go x) (go y)
> go (Prio p x) = Prio (f p) x
runPrio also has a simple implementation: we find the minimal priority, and then evaluate all actions having this priority. When no priorities are left, we can use unsafeEvaluate to evaluate the whole tree [3].
> runPrio :: (Monad m, Ord p) => Prio p m a -> m a
> runPrio os = case findMinimalPriority os of
> Just p -> evaluatePriority p os >>= runPrio
> Nothing -> return $ unsafeEvaluate os
The three auxiliary functions used here (findMinimalPriority, evaluatePriority and unsafeEvaluate) should be hidden from the user-facing API (except perhaps findMinimalPriority). Let’s look at how these functions are implemented next.
findMinimalPriority simply goes through the Prio value and returns the minimal priority.
> findMinimalPriority
> :: forall p m a. (Monad m, Ord p)
> => Prio p m a -> Maybe p
> findMinimalPriority = go Nothing
> where
> go :: forall b. Maybe p -> Prio p m b -> Maybe p
> go !acc (Pure _) = acc
> go !acc (App x y) = go (go acc x) y
> go !Nothing (Prio p _) = Just p
> go !(Just !p0) (Prio p _) = Just (min p0 p)
evaluatePriority evaluates all nodes with a priority equal to or less than the given priority. We do so by replacing each such Prio constructor with a Pure constructor.
> evaluatePriority
> :: forall p m a. (Monad m, Ord p)
> => p -> Prio p m a -> m (Prio p m a)
> evaluatePriority p0 = go
> where
> go :: forall b. Prio p m b -> m (Prio p m b)
> go (Pure x) = return (Pure x)
> go (App x y) = liftM2 App (go x) (go y)
> go (Prio p f)
> | p <= p0 = liftM Pure f
> | otherwise = return (Prio p f)
After we have recursively called findMinimalPriority and evaluatePriority until all the Prio nodes are gone, we can call unsafeEvaluate to get our actual value out.
> unsafeEvaluate :: Prio p m a -> a
> unsafeEvaluate (Pure x) = x
> unsafeEvaluate (App x y) = (unsafeEvaluate x) (unsafeEvaluate y)
> unsafeEvaluate (Prio _ _) = error
> "unsafeEvaluate: internal error: some steps still unevaluated"
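To see the scheduling in action independently of the restaurant example, here is a self-contained sketch that restates the definitions above in a plain (non-literate) module and uses an IORef to record the order in which actions actually run (the IORef recording setup is mine):

```haskell
{-# LANGUAGE GADTs #-}
import Control.Monad (liftM, liftM2)
import Data.IORef

data Prio p m a where
  Pure :: a -> Prio p m a
  App  :: Prio p m (a -> b) -> Prio p m a -> Prio p m b
  Prio :: p -> m a -> Prio p m a

instance Functor (Prio p m) where
  fmap f = App (Pure f)

instance Applicative (Prio p m) where
  pure  = Pure
  (<*>) = App

findMinimalPriority :: Ord p => Prio p m a -> Maybe p
findMinimalPriority = go Nothing
  where
    go acc (Pure _)         = acc
    go acc (App x y)        = go (go acc x) y
    go Nothing   (Prio p _) = Just p
    go (Just p0) (Prio p _) = Just (min p0 p)

evaluatePriority :: (Monad m, Ord p) => p -> Prio p m a -> m (Prio p m a)
evaluatePriority _  (Pure x)  = return (Pure x)
evaluatePriority p0 (App x y) =
  liftM2 App (evaluatePriority p0 x) (evaluatePriority p0 y)
evaluatePriority p0 (Prio p f)
  | p <= p0   = liftM Pure f
  | otherwise = return (Prio p f)

unsafeEvaluate :: Prio p m a -> a
unsafeEvaluate (Pure x)   = x
unsafeEvaluate (App x y)  = unsafeEvaluate x (unsafeEvaluate y)
unsafeEvaluate (Prio _ _) = error "unsafeEvaluate: unevaluated node"

runPrio :: (Monad m, Ord p) => Prio p m a -> m a
runPrio os = case findMinimalPriority os of
  Just p  -> evaluatePriority p os >>= runPrio
  Nothing -> return (unsafeEvaluate os)

main :: IO ()
main = do
  ref <- newIORef []
  let step p name = Prio p (modifyIORef ref (name :))
  _ <- runPrio $ (,,)
         <$> step (2 :: Int) "low"
         <*> step 1 "high"
         <*> step 1 "high too"
  order <- readIORef ref
  -- Both priority-1 actions ran before the priority-2 one,
  -- even though "low" appears first in the expression:
  print (reverse order)
```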
Usage example
We can now try this out. Remember the type of doRequest:
doRequest :: Request -> State Inventory (Maybe Dessert)
Let’s add a variant which uses the priority of the Request:
> prioRequest :: Request -> Prio Priority (State Inventory) (Maybe Dessert)
> prioRequest req = prio (requestPriority req) (doRequest req)
And for the whole family:
> prioFamilyRequests
> :: Family Request
> -> Prio Priority (State Inventory) (Family (Maybe Dessert))
> prioFamilyRequests family = Family
> <$> prioRequest (familyParent1 family)
> <*> prioRequest (familyParent2 family)
> <*> (modifyPrio (\x -> x - 1) $
> traverse prioRequest (familyChildren family))
Ain’t that clean code. Let’s test it out:
> familyRequest :: Family Request
> familyRequest = Family
> { familyParent1 = RequestCheapestDessert
> , familyParent2 = RequestSpecificDessert "Apple Pie"
> , familyChildren = V.fromList
> [ RequestCheapestDessert
> , RequestSpecificDessert "Pancake"
> , RequestSpecificDessert "Crème brûlée"
> ]
> }
> test04 = runState (runPrio $ prioFamilyRequests familyRequest) defaultInventory
*Main> runState (runPrio $ prioFamilyRequests familyRequest)
defaultInventory
( Family
{ familyParent1 = Just "Tiramisu"
, familyParent2 = Just "Apple Pie"
, familyChildren = fromList
[ Just "Apple Pie"
, Just "Pancake"
, Nothing
]
}
, []
)
Correct!
Conclusion
Prio is an interesting Applicative. I particularly like the fact that it works for every Monad (although it doesn’t make sense for some Monads such as Reader).
Use cases are rare. I’ve only encountered one, and I could have implemented it in a different way (although this feels a lot cleaner). However, an important point is that it illustrates the difference between Applicative and Monad very well.
Thanks to Alex Sayers, Jared Tobin and Maciej Wos for proofreading and discussions.
Appendix: a faster runPrio
> test05 = runState (fastRunPrio $ prioFamilyRequests familyRequest) defaultInventory
I have been asked to include the code for a faster runPrio, so here it is. As you might expect, it is not as clean as the original one.
The code runs in roughly three steps:
1. Build a queue which sorts all the elements by priority. In addition to the priority, we have an Int key per Prio node, determined by position.
2. Evaluate this queue in the arbitrary Monad m. As a result, we get a Map from this Int key to the evaluated value (of type Any).
3. Run through the original Prio again, and whenever we encounter a Prio node, use the Int key to look up and unsafeCoerce the evaluated value from the Map.
> fastRunPrio :: forall p m a. (Monad m, Ord p) => Prio p m a -> m a
> fastRunPrio prio0 = do
> let (queue, _) = buildQueue 0 prio0 PSQ.empty
> m <- evaluateQueue queue M.empty
> let (x, _) = evalPrio m 0 prio0
> return x
> where
The three steps are implemented in three auxiliary methods, which you can find here:
> buildQueue
> :: forall b.
> Int
> -> Prio p m b
> -> PSQ.OrdPSQ Int p (m Any)
> -> (PSQ.OrdPSQ Int p (m Any), Int)
> buildQueue !i (Pure _) !acc = (acc, i)
> buildQueue !i (App x y) !acc =
> let (acc', i') = buildQueue i x acc in buildQueue i' y acc'
> buildQueue !i (Prio p mx) !acc =
> (PSQ.insert i p (liftM unsafeCoerce mx) acc, i + 1)
>
> evaluateQueue
> :: PSQ.OrdPSQ Int p (m Any)
> -> M.Map Int Any
> -> m (M.Map Int Any)
> evaluateQueue q !acc = case PSQ.minView q of
> Nothing -> return acc
> Just (k, _, mx, q') -> do
> x <- mx
> evaluateQueue q' (M.insert k x acc)
>
> evalPrio
> :: forall b.
> M.Map Int Any
> -> Int
> -> Prio p m b
> -> (b, Int)
> evalPrio m !i (Pure x) = (x, i)
> evalPrio m !i (App x y) =
> let (x', i') = evalPrio m i x
> (y', i'') = evalPrio m i' y
> in (x' y', i'')
> evalPrio m !i (Prio p mx) =
> (unsafeCoerce (m M.! i), i + 1)
[1] It could also be implemented on top of the Free Applicative, but I have decided against that to keep this blogpost as simple as possible.
[2] A faster (but less concise) implementation would be to add a ModifyPrio constructor, and evaluate all of these at once, so we only have to go through the tree once.
[3] This implementation is very slow (quadratic in the number of nodes in the Prio “tree”). I have found a faster way to implement this, but it is again less concise and requires the use of unsafeCoerce, so it is omitted from this blogpost. Update: I have included this method in the Appendix.
2015-05-26T00:00:00Z

Categories: Functional Programming

Surprised to discover myself the subject of a "Spotlight on Style" in Stylish Academic Writing by Helen Sword. Thanks to @ctford for the heads up. The segment is about titles (no surprise there) and also quotes Robby Findler, Simon Peyton-Jones, Ralf Laemmel, Sam Lindley, Simon Marlow, Martin Odersky, Enno Runne, and Jeremy Yallop.

Categories: Functional Programming

A powerful feature of Haskell’s type system is that we can deduce
properties of functions by looking _only at their type_. For example, a
function of type
``` {.haskell}
f :: ∀a. a -> a
```
can only be the identity function: since it must return something of
type a, for _any_ type a, the only thing [...]

Categories: Functional Programming

I have a new academic/professional website: http://cl.indiana.edu/~wren/!
There are still a few unfinished areas (e.g., my publications page), but hopefully I'll be finished with them shortly. The new site is built with Hakyll instead of my old Google-Summer-of-Code static website generator. I'm still learning how to implement my old workflow in Hakyll, and if I get the chance I'll write a few posts on how I've set things up. Reading through other folks' posts on how they use Hakyll has been very helpful, and I'd like to give back to the community. I've already issued a pull request for adding two new combinators for defining template fields.
In the meantime, if you notice any broken links from my old blog posts, please let me know.
2015-05-23T03:14:26Z

Categories: Functional Programming

The Commercial Haskell SIG members want to help people adopt Haskell. What would help? Data beats speculation, so FP Complete recently emailed surveys to over 16000 people interested in Haskell. The questions were aimed at identifying needs rather than celebrating past successes, and at helping applied users rather than researchers.

Over 1240 people sent detailed replies, spending over 250 person-hours to provide their answers. This rich data set includes extensive information on Haskell user needs.
We are open-sourcing the entire anonymized data set, downloadable by clicking here [.zip].
There are numeric ratings and extensive textual comments. Feel free to analyze the data -- or just read the summaries -- and share your most helpful and actionable conclusions with the community. We will too.

First Results

Although we have completed only basic analysis, here are some of our first findings -- those so clear that they show up on even the most basic examination of the aggregate data.

Satisfaction with the language, the compiler, and the community is high.

Among non-students, 58% would recommend Haskell for a project at their workplace, but only 26% actually use it at work -- partly due to colleagues unfamiliar with Haskell who see it as requiring skills that are hard to obtain, or who need to see more success stories. Would improvement to colleagues' perceptions make a difference in the team's choice of Haskell for a project? 33% of respondents rated this "crucial" and another 26% said it would be "important", while only 16% said it would be a "slight help" or no help.

Package management with cabal is the single worst aspect of using Haskell. Asked if improvements to package management would make a difference to their future choice of Haskell for a project, 38% said it would be "crucial" and a further 29% said it would be "important". Comments connected cabal with words like hell, pain, awful, sucks, frustrating, and hideous. Only this topic showed such grave dissatisfaction.

Documentation improvements are a very high priority. For example, users need more concrete tutorials and templates showing them exactly what kinds of problems Haskell is good at solving, and exactly how to implement such programs completely. 65% of respondents said improvements to documentation and learning resources would be crucial or important, and a further 23% said they would be helpful. However, comments did not begin to approach the level of concern seen with cabal.

Skills are a priority. Users need to see that people with Haskell skills are readily available, and that Haskell skills are quite feasible to learn. A majority of respondents said an improvement in availability of skilled personnel would make an important or crucial difference to them, and many also expressed concern about their or colleagues' abilities to learn the needed concepts and skills.

We have started deeper statistical analysis of the data, and we hope that some readers of this post will perform -- and share -- even better analyses than we can. New issues may become clearer by clustering or segmenting users, or through other statistical techniques. Also, we may find more clarity about needs through deeper study of textual responses. Follow-up studies are also a possibility.

We propose that the community, given this large and detailed data set, should set some of its priorities in a data-driven manner focused on user-expressed needs. This effort should be complementary to the ongoing research on issues of historic Haskell strength such as language design and runtime architecture.

Areas for Further Work

We request that useful findings or insights derived from the open-sourced data set be shared with the community, including attribution of the source of the data and the methods of analysis used.

The data collected strongly invites clustering or segmentation, so as to identify needs of different sub-populations. FP Complete has already begun one such study.

The data collected includes extensive textual remarks which should be studied by knowledgeable people for insights. Automated text analysis methods may also be applicable.

Cost-benefit analysis seems worthwhile: based on identified needs, what improvements would help the most people, to the greatest degree, at the least cost and/or the least delay? A method to match volunteer contributors with identified high-payoff projects also seems worthwhile.

It would be useful to merge the data from versions 0.1 and 0.2 with version 1.0 of the survey, since 1.0 includes only 71% of the total answers received. Differences between the questions and scales make this a nontrivial, but still realistic, goal.

If important new hypotheses require testing, or if further detail is needed, we intend to conduct follow-up research at some future date among users who volunteered their email addresses for follow-up questions.

A future repeat survey could determine which improvement efforts are successful.

Methodology Notes

This was not a survey of the general public, but of a population motivated to provide feedback on Haskell. Invitees included 16165 non-opted-out email addresses gathered from FP Complete's website, in randomized order. Due to privacy considerations this list will not be published, but FP Complete was able to use it to contact these users since the survey was directly related to their demonstrated interest in Haskell. The high quality of the list is reflected in the extremely high response rate (7.7%), the low bounce rate (1.9%), and the low unsubscribe rate (also 1.9%).

Surveys were conducted using SurveyGizmo.com, with an email inviting each participant to click a link to a four-page Web-based survey. Survey form 0.1 invitations went to 1999 users, of whom 190 completed the survey. Survey form 0.2, incorporating some edits, went to 2000 users, of whom 170 completed the survey. Survey form 1.0, incorporating further edits, went to 12166 users, of whom 894 completed the survey.

Form 0.2 incorporated edits to eliminate questions yielding little information about how to help users, either because satisfaction was very high (the language itself, the compiler, the community) or because two questions were redundant. Also, new questions inspired by textual responses to form 0.1 were included.

Form 1.0 incorporated further such edits. Also, the rating scale was changed to ask about helping the user's (and team's) future choice of Haskell rather than current usefulness/difficulty. The ratings questions were displayed under the heading "Would improvements help you and your group to choose Haskell for your future work?"

Responses were processed anonymously, but users were given the option to fill in their email address if they would accept follow-up questions, and the option to name their company/organization. Users were informed that the survey results, without these fields, would be shared with the community.

Acknowledgments

We are grateful to the many, many people who spent their valuable time and expertise completing and returning their survey forms. Thanks to Dr. Tristan Webb and Ms. Noelle McCool-Smiley, both of FP Complete, for their material help in formulating and conducting the survey. Thanks to FP Complete's corporate customers for providing the revenues that allow us to fund this and other community projects. Thanks to the Commercial Haskell SIG for providing the motivation behind this project. Thanks to the many volunteers who've spent absolutely huge amounts of time and expertise making Haskell as good as it is today, and who continue to make improvements like those requested by the survey participants. Thanks to the companies that allow some of their staff to spend company time making such contributions to the common good. Special thanks to the late Professor Paul Hudak; may we all strive to live up to his example.

Categories: Functional Programming

tl;dr Please check out beta.stackage.org

I made the first commit to the Stackage Server code base a little over a year ago. The goal was to provide a place to host package sets which both limited the number of packages from Hackage available, and modified packages where necessary. This server was to be populated by regular Stackage builds, targeted at multiple GHC versions, and consisted of both inclusive and exclusive sets. It also allowed interested individuals to create their own package sets.

If any of those details seem surprising today, they should. A lot has happened
for the Stackage project in the past year, making details of what was initially
planned irrelevant, and making other things (like hosting of package
documentation) vital. We now have LTS Haskell. Instead of running with multiple
GHC versions, we have Stackage Nightly which is targeted at a single GHC major
version. To accomodate goals for GPS Haskell (which unfortunately never
materialized),
Stackage no longer makes corrections to upstream packages.I could go into lots more detail on what is different in project requirements.
Instead, I'll just summarize: I've been working on a simplified version of the
Stackage Server codebase to address our goals better, more easily ensure high
availability, and make the codebase easier to maintain. We also used this
opportunity to test out a new hosting system our DevOps team put together. The
result is running on beta.stackage.org, and will
replace the official stackage.org after a bit more testing (which I hope
readers will help with).The codeAll of this code lives on the simpler branch of the stackage-server code
base, and much to my joy, resulted in quite a bit less
code. In fact,
there's just about a 2000 line reduction. The rest of this post will get into
how that happened.No more custom package setsOne of the features I mentioned above was custom package sets. This fell out
automatically from the initial way Stackage Server was written, so it was
natural to let others create package sets of their own. However, since release,
only one person actually used that feature. I discussed with him, and he agreed
with the decision to deprecate and then remove that functionality.So why get rid of it now? Two powerful reasons:We already host a public mirror of all packages on S3. Since we no longer patch upstream packages, it's best if tooling is able to just refer to that high-reliability service.We now have Git repositories for all of LTS Haskell and Stackage Nightly. Making these the sources of package sets means we don't have two (possibly conflicting) sources of data. That brings me to the second pointUpload code is goneWe had some complicated logic to allow users to upload package sets. It started
off simple, but over time we added Haddock hosting and other metadata features,
making the code more complex. Actually, it ended up having two parallel code
paths for this. So instead, we now just upload information on the package sets
to the Git repositories, and leave it up to a separate process (described
below) to clone these repositories and make the data available to the server.Haddocks on S3After generating a snapshot, the Haddocks used to be tarred and compressed, and
then uploaded as a compressed bundle to S3. Then, Stackage Server would receive
a request for files, unpack them, and serve them. This presented some problems:Users would have to wait for a first request to succeed during the unpackingWith enough snapshots being generated, we would eventually run out of disk space and need to clear our temp directorySince we run our cluster in a high availabilty mode with multiple horizontally-scaled machines, one machine may have finished unpacking when another didn't, resulting in unstyled content (see issue #82).Instead, we now just upload the files to S3 and redirect there from
stackage-server (though we'll likely switch to reverse proxying to allow for
nicer SSL urls). In fact, you can easily view these docs, at URLs such as
http://haddock.stackage.org/lts-2.9/ or
https://s3.amazonaws.com/haddock.stackage.org/nightly-2015-05-21/index.html.These Haddocks are publicly available, and linkable from projects beyond
Stackage Server. Each set of Haddocks is guaranteed to have consistent internal
links to other compatible packages. And while some documentation doesn't
generate due to known package
bugs,
the generation is otherwise reliable.I've already offered access to these docs to Duncan for usage on Hackage, and
hope that will improve the experience for users there.Metadata SQLite databasePreviously, information on snapshots was stored in a PostgreSQL database that
was maintained by Stackage Server. This database also had package metadata,
like author, homepage, and description. Now, we have a completely different
process:The all-cabal-metadata from the Commercial Haskell Special Interest Group provides an easily cloneable Git repo with package metadata, which is automatically updated by Travis.We run a cron job on the stackage-build server that updates the lts-haskell, stackage-nightly, and all-cabal-metadata repos and generates a SQLite database from them with all of the data that Stackage Server needs. You can look at the Stackage.Database module for some ideas of what this consists of. That database gets uploaded to Amazon S3, and is actually publicly available if you want to poke at itThe live server downloads a new version of this file on a regular basisI've considered spinning off the Stackage.Download code into its own repository
so that others can take advantage of this functionality in different contexts
if desired. Let me know if you're interested.At this point, the PostgreSQL database is just used for non-critical
functionality, such as social features (tags and likes).Slightly nicer URLsWhen referring to a snapshot, there are "official" short names (slugs), of the
form lts-2.9 and nightly-2015-05-22. The URLs on the new server
now reflect this perfectly, e.g.:
https://beta.stackage.org/nightly-2015-05-22.
We originally used hashes of the snapshot content for the original URLs, but
that was fixed a while
ago. Now that we only have
to support these official snapshots, we can always (and exclusively) use these
short names.As a convenience, if you visit the following URLs, you get automatic redirects:/nightly redirects to the most recent nightly/lts to the latest LTS/lts-X to the latest LTS in the X.* major version (e.g., today, /lts-2 redirects to /lts-2.9)This also works for URLs under that hierarchy. For example, consider
https://beta.stackage.org/lts/cabal.config,
which is an easy way to get set up with LTS in your project (by running wget
https://beta.stackage.org/lts/cabal.config).ECS-based hostingWhile not a new feature of the server itself, the hosting cluster we're running
this on is brand new. Amazon recently released EC2 Container Service, which is
a service for running Docker containers. Since we're going to be using this for
the new School of
Haskell, it's
nice to be giving it a serious usage now. We also make extensive use of Docker
for customer projects, both for builds and hosting, so it's a natural extension
for us.This ECS cluster uses standard Amazon services like Elastic Load Balancer (ELB)
and auto-scaling to provide for high availability in the case of machine
failure. And while we have a lot of confidence in our ability to keep Stackage
Server up and running regularly, it's nice that our most important user-facing
content is provided by these external services:Haddocks on S3Package mirroring on S3LTS Haskell and Stackage Nightly build plans on GithubPackage metadata on GithubPackage index metadata on Github (via stackage-update and all-cabal-files/hashes)This provides for a pleasant experience in both browsing the website and using
Stackage in your build system.A special thanks to Jason Boyer for providing this new hosting cluster, which
the whole FP Complete team is looking forward to putting through its paces.
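
The slug rules described above (/lts, /lts-X, /nightly) can be sketched as a small pure function. This is a hypothetical illustration only, not the actual stackage-server code; the Snapshot type and resolve function are invented for the example:

```haskell
-- Hypothetical model of the official snapshot names; not the server's code.
data Snapshot = LTS Int Int | Nightly String  -- LTS major minor | Nightly "YYYY-MM-DD"
  deriving (Eq, Show)

slugOf :: Snapshot -> String
slugOf (LTS ma mi) = "lts-" ++ show ma ++ "." ++ show mi
slugOf (Nightly d) = "nightly-" ++ d

-- Resolve the convenience slugs to a concrete snapshot name.
resolve :: [Snapshot] -> String -> Maybe String
resolve snaps q = slugOf <$> go q
  where
    ltss      = [(ma, mi) | LTS ma mi <- snaps]
    nightlies = [d | Nightly d <- snaps]
    go "lts"     | not (null ltss)      = Just (uncurry LTS (maximum ltss))
    go "nightly" | not (null nightlies) = Just (Nightly (maximum nightlies))
    go ('l':'t':'s':'-':ms)
      | [(ma, "")] <- (reads ms :: [(Int, String)])
      , let minors = [mi | (ma', mi) <- ltss, ma' == ma]
      , not (null minors)
      = Just (LTS ma (maximum minors))
    go _ = Nothing
```

With snapshots lts-2.1, lts-2.9 and lts-1.14 available, resolve answers "lts-2" with Just "lts-2.9", mirroring the redirect behaviour described above.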

Categories: Functional Programming

Summary: The development version of ghcid seemed to have some problems with terminating when Control-C was hit, so I investigated and learnt some things.

Given a long-running/interactive console program (e.g. ghcid), when the user hits Control-C/Ctrl-C the program should abort. In this post I'll describe how that works in Haskell, how it can fail, and what asynchronous exceptions have to do with it.

What happens when the user hits Ctrl-C?

When the user hits Ctrl-C, GHC raises an async exception of type UserInterrupt on the main thread. This happens because GHC installs an interrupt handler which raises that exception, sending it to the main thread with throwTo. If you install your own interrupt handler you won't see this behaviour and will have to handle Ctrl-C yourself.

There are reports that if the user hits Ctrl-C twice the runtime will abort the program. In my tests, that seems to be a feature of the shell rather than GHC itself - in the Windows Command Prompt no amount of Ctrl-C stops an errant program, while in Cygwin a single Ctrl-C works.

What happens when the main thread receives UserInterrupt?

There are a few options:

- If you are not masked and there is no exception handler, the thread will abort, which causes the whole program to finish. This behaviour is the desirable outcome if the user hits Ctrl-C.
- If you are running inside an exception handler (e.g. catch or try) which is capable of catching UserInterrupt, then the UserInterrupt exception will be returned. The program can then take whatever action it wishes, including rethrowing UserInterrupt or exiting the program.
- If you are running with exceptions masked, then the exception will be delayed until you stop being masked. The most common way of running while masked is if the code is the second argument to finally or one of the first two arguments to bracket.
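
The second case above, where an exception handler receives UserInterrupt, can be demonstrated without actually pressing Ctrl-C, by delivering the exception to our own thread with throwTo. A small standalone sketch (simulateCtrlC and the 100 ms delay are mine, purely for illustration):

```haskell
import Control.Concurrent (forkIO, myThreadId, threadDelay)
import Control.Exception (AsyncException (..), throwTo, try)

-- Simulate the user hitting Ctrl-C while the main computation sits
-- inside an exception handler; report whether UserInterrupt was caught.
simulateCtrlC :: IO Bool
simulateCtrlC = do
    tid <- myThreadId
    -- Play the role of GHC's interrupt handler: deliver UserInterrupt shortly.
    _ <- forkIO $ threadDelay 100000 >> throwTo tid UserInterrupt
    r <- try (threadDelay 1000000)  -- stands in for a long-running computation
    case r of
        Left UserInterrupt -> return True   -- the handler received UserInterrupt
        _                  -> return False
```

Here try plays the role of the exception handler from the second bullet: the long threadDelay is interrupted and Left UserInterrupt is returned instead of the program aborting.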
Since Ctrl-C will be delayed while the program is masked, you should only do quick things while masked.

How might I lose UserInterrupt?

The easiest way to "lose" a UserInterrupt is to catch it and not rethrow it. Taking a real example from ghcid, I sometimes want to check if two paths refer to the same file, and to make that check more robust I call canonicalizePath first. This function raises errors in some circumstances (e.g. the directory containing the file does not exist), but is inconsistent about error conditions between OSes, and doesn't document its exceptions, so the safest thing is to write:

canonicalizePathSafe :: FilePath -> IO FilePath
canonicalizePathSafe x = canonicalizePath x
    `catch` \(_ :: SomeException) -> return x

If there is any exception, just return the original path. Unfortunately, the catch will also catch and discard UserInterrupt. If the user hits Ctrl-C while canonicalizePath is running, the program won't abort. The problem is that UserInterrupt is not thrown in response to the code inside the catch, so ignoring UserInterrupt is the wrong thing to do.

What is an async exception?

In Haskell there are two distinct ways to throw exceptions, synchronously and asynchronously.

- Synchronous exceptions are raised on the calling thread, using functions such as throw and error. The point at which a synchronous exception is raised is explicit and can be relied upon.
- Asynchronous exceptions are raised by a different thread, using throwTo and a different thread id. The exact point at which the exception occurs can vary.

How is the type AsyncException related?

In Haskell, there is a type called AsyncException, containing four exceptions - each special in their own way:

- StackOverflow - the current thread has exceeded its stack limit.
- HeapOverflow - never actually raised.
- ThreadKilled - raised by calling killThread on this thread. Used when a programmer wants to kill a thread.
- UserInterrupt - the one we've been talking about so far, raised on the main thread by the user hitting Ctrl-C.

While these have a type AsyncException, that's only a hint as to their intended purpose. You can throw any exception either synchronously or asynchronously. In our particular case of canonicalizePathSafe, if canonicalizePath causes a StackOverflow, we are probably happy to take the fallback case, but it's likely the stack was already close to the limit and the overflow will occur again soon. If the programmer calls killThread, that thread should terminate, but in ghcid we know this thread won't be killed.

How can I avoid catching async exceptions?

There are several ways to avoid catching async exceptions. Firstly, since we expect canonicalizePath to complete quickly, we can just mask all async exceptions:

canonicalizePathSafe x = mask_ $ canonicalizePath x
    `catch` \(_ :: SomeException) -> return x

We are now guaranteed that catch will not receive an async exception. Unfortunately, if canonicalizePath takes a long time, we might delay Ctrl-C unnecessarily.

Alternatively, we can catch only non-async exceptions:

canonicalizePathSafe x = catchJust
    (\e -> if async e then Nothing else Just e)
    (canonicalizePath x)
    (\_ -> return x)

async e = isJust (fromException e :: Maybe AsyncException)

We use catchJust to only catch exceptions which aren't of type AsyncException, so UserInterrupt will not be caught. Of course, this actually avoids catching exceptions of type AsyncException, which is only related to async exceptions by a partial convention not enforced by the type system.

Finally, we can catch only the relevant exceptions:

canonicalizePathSafe x = canonicalizePath x
    `catch` \(_ :: IOException) -> return x

Unfortunately, I don't know what the relevant exceptions are - on Windows canonicalizePath never seems to throw an exception. However, IOException seems like a reasonable guess.

How to robustly deal with UserInterrupt?

I've shown how to make canonicalizePathSafe not interfere with UserInterrupt, but now I need to audit every piece of code (including library functions I use) that runs on the main thread to ensure it doesn't catch UserInterrupt. That is fragile. A simpler alternative is to push all computation off the main thread:

import Control.Concurrent.Extra
import Control.Exception.Extra

ctrlC :: IO () -> IO ()
ctrlC act = do
    bar <- newBarrier
    forkFinally act $ signalBarrier bar
    either throwIO return =<< waitBarrier bar

main :: IO ()
main = ctrlC $ ... as before ...

We are using the Barrier type from my previous blog post, which is available from the extra package. We create a Barrier, run the main action on a forked thread, then marshal completion/exceptions back to the main thread. Since the main thread has no catch operations and only a few (audited) functions on it, we can be sure that Ctrl-C will quickly abort the program.

Using version 1.1.1 of the extra package, we can simplify the code to ctrlC = join . onceFork.

What about cleanup?

Now we've pushed most actions off the main thread, any finally sections are on other threads, and will be skipped if the user hits Ctrl-C. Typically this isn't a problem, as program shutdown automatically cleans all non-persistent resources. As an example, ghcid spawns a copy of ghci, but on shutdown the pipes are closed and the ghci process exits on its own. If we do want robust cleanup of resources such as temporary files, we would need to run the cleanup from the main thread, likely using finally.

Should async exceptions be treated differently?

At the moment, Haskell defines many exceptions, any of which can be thrown either synchronously or asynchronously, but then hints that some are probably async exceptions. That's not a very Haskell-like thing to do. Perhaps there should be a catch which ignores exceptions thrown asynchronously? Perhaps the sync and async exceptions should be of different types? It seems unfortunate that functions have to care about async exceptions as much as they do.

Combining mask and StackOverflow

As a curiosity, I tried to combine a function that stack overflows (using -O0) and mask. Specifically:

main = mask_ $ print $ foldl (+) 0 [1..1000000]

I then ran that with +RTS -K1k. That prints out the value computed by the foldl three times (seemingly just a buffering issue), then fails with a StackOverflow exception. If I remove the mask, it just fails with StackOverflow. It seems that by disabling StackOverflow I'm allowed to increase my stack size arbitrarily. Changing print to appendFile causes the file to be created but not written to, so it seems there are oddities about combining these features.

Disclaimer

I'm certainly not an expert on async exceptions, so corrections welcome. All the above assumes compiling with -threaded, but most applies without -threaded.
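
As a closing aside, the catchJust approach from earlier can be packaged as a reusable combinator. The name catchSync is my own, not from any library; this is a sketch, not an endorsed API:

```haskell
import Control.Exception
import Data.Maybe (isJust)

-- Like catch, but lets exceptions of type AsyncException propagate,
-- following the (partial) convention that those are async exceptions.
catchSync :: IO a -> (SomeException -> IO a) -> IO a
catchSync act handler = catchJust pick act handler
  where
    pick e | isJust (fromException e :: Maybe AsyncException) = Nothing
           | otherwise                                        = Just e
```

A sync exception like userError is handled, while a thrown UserInterrupt passes straight through, which is exactly the behaviour canonicalizePathSafe wanted.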
2015-05-21T20:19:00Z
Neil Mitchell

Categories: Functional Programming

You will:
Write backend services that power our listings, internal dashboards and platform. Our services are built on Erlang, Python and Clojure.
Write clean, fast and maintainable code using functional paradigms.
Research the latest trends in software and adapt them to business needs.
Contribute to open source libraries and write educational blogs.
Take care of the health of the production setup and ensure all systems are up and running at all times.
What are we looking for?
Strong fundamentals of computer programming.
Familiarity with any functional programming language.
Have contributed to open source projects.
Have shown a real interest in learning programming.
Ability to learn new languages and frameworks.
In-depth knowledge of libraries and programming languages used on a day-to-day basis.
In-depth knowledge of REST services, the HTTP protocol and messaging services.
Comfortable with event-driven programming and multi-threaded programming.
Good to have:
Familiarity with AngularJS/React.
Familiarity with Clojure/Lisp/Haskell/Erlang/SML/Racket/Scala.
Perks:
Freedom to experiment and evangelize the latest technologies.
Best-in-industry salary and significant equity.
Leadership position and fast track career growth.
What are we doing?
We are a peer-to-peer marketplace for used cars. We accelerate P2P car transactions.
We want to offer the average Indian a credible, easy alternative to buying an expensive new car.
As you read this, we are coding to make used-car buying as systematic as ordering a phone off Flipkart.
Where can I see what you have built thus far?
Download Zoomo for Android. On Apple/desktop, check us out at www.gozoomo.com.
Why used cars?
Used cars are a $10B market that will grow tenfold to $100B over a 7-10 year timeframe.
Buying a used car makes immense economic sense for a value-for-money economy like ours.
A car is a depreciating asset, often losing ~30% of its value in the first year itself.
Why buy a mid-segment hatchback for 6L when you can get an almost-new premium hatchback/sedan at 4L?
What is it that we are fixing?
Buying a used car is horribly difficult today. Used-car buying is one experience still untouched by advances in interfaces, devices, networks and technology in general.
As per our studies, more than 70% of users who start searching for a used car drop off to buy an expensive new car.
We are changing all that.
We are making people happy by saving them big bucks. By finding them cars they fall in love with.
So how are you solving this/ are different/ are confident that this is going to be big?
Meet us in person and we will spill the beans :) You will get great coffee and 2-way conveyance.
Get information on how to apply for this position.
2015-05-20T10:40:39Z

Categories: Functional Programming

Because Haskell is a language of choice for many problem domains, and for scales ranging from one-off scripts to full-scale web services, we are fortunate to by now have over 8,000 open source packages (and a few commercial ones besides) available to build from. But in practice, Haskell programming in the real world involves interacting with myriad legacy systems and libraries. This is partially because the industry is far older than the comparatively recent strength of our community, but also because quality new high-performance libraries are created every day in languages other than Haskell, be it intensive numerical codes or frameworks for shuffling bits across machines. Today we are releasing inline-c, a package for writing mixed C/Haskell source code that seamlessly invokes native and foreign functions in the same module. No FFI required.

The joys of programming with foreign code

Imagine that you just found a C library that you wish to use for your project. The standard workflow is to:

- check Hackage to see if a package with a set of bindings for that library exists,
- if one does, program against that, or
- if it doesn't, write your own bindings package, using Haskell's FFI.

Writing and maintaining bindings for large C libraries is hard work. The libraries are constantly updated upstream, so the bindings you find are invariably out-of-date: they provide only partial coverage of the library's API, sometimes don't compile cleanly against the latest upstream version of the C library, or need convoluted and error-prone conditional compilation directives to support multiple versions of the API in the package. Which is a shame, because typically you only need to perform a very specific task using some C library, using only a minute proportion of its API. It can be frustrating for a bindings package to fail to install only because the binding for some function that you'll never use doesn't match up with the header files of the library version you happen to have installed on your system.

This is especially true for large libraries that expose sets of related but orthogonal and independently useful functions, such as GTK+, OpenCV or numerical libraries such as the GNU Scientific Library (GSL), NAG and IMSL. inline-c lets you call functions from these libraries using the full power of C's syntax, directly from client code, without the need for monolithic bindings packages. High-level bindings (or "wrappers") may still be useful to wrap low-level details into an idiomatic Haskell interface, but inline-c enables rapid prototyping and iterative development of code that uses some of the C library directly today, keeping for later the task of abstracting calls into a high-level, type-safe wrapper as needed. In short, inline-c lets you "pay as you go" when programming foreign code.

We first developed inline-c for use with numerical libraries, in particular the popular and very high quality commercial NAG library, for tasks including ODE solving, function optimization, and interpolation. If getting seamless access to the gold standard of fast and reliable numerical routines is what you need, then you will be interested in our companion package to work specifically with NAG, inline-c-nag.

A taste of inline-c

What follows is just a teaser of what can be done with inline-c.
Please refer to the Haddock documentation and the README for more details on how to use the showcased features.

Let's say we want to use C's variadic printf function and its convenient string formats. inline-c lets you write this function call inline, without any need for a binding to the foreign function:

{-# LANGUAGE QuasiQuotes #-}
{-# LANGUAGE TemplateHaskell #-}
import qualified Language.C.Inline as C

C.include "<math.h>"
C.include "<stdio.h>"

main :: IO ()
main = do
  x <- [C.exp| int{ printf("Some number: %.2f\n", cos(0.5)) } |]
  putStrLn $ show x ++ " characters printed."

Importing Language.C.Inline brings into scope the Template Haskell function include to include C headers (<math.h> and <stdio.h>), and the exp quasiquoter for embedding expressions in C syntax in Haskell code. Notice how inline-c has no trouble even with C functions that have advanced calling conventions, such as variadic functions. This is a crucial point: we have the full power of C available at our fingertips, not just whatever can be shoe-horned through the FFI.

We can capture Haskell variables to be used in the C expression, such as when computing x below:

mycos :: CDouble -> IO CDouble
mycos x = [C.exp| double{ cos($(double x)) } |]

The anti-quotation $(double x) indicates that we want to capture the variable x from the Haskell environment, and that we want it to have type double in C (inline-c will check at compile time that this is a sensible type ascription).

We can also splice in a block of C statements, and explicitly return the result:

C.include "<stdio.h>"

-- | @readAndSum n@ reads @n@ numbers from standard input and returns
-- their sum.
readAndSum :: CInt -> IO CInt
readAndSum n = do
  x <- [C.block| int {
      int i, sum = 0, tmp;
      for (i = 0; i < $(int n); i++) {
        scanf("%d ", &tmp);
        sum += tmp;
      }
      return sum;
    } |]
  print x
  return x

Finally, the library provides facilities to easily use Haskell data in C. For example, we can easily use Haskell ByteStrings in C:

{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE QuasiQuotes #-}
import qualified Data.ByteString as BS
import Data.Monoid ((<>))
import Foreign.C.Types
import qualified Language.C.Inline as C

C.context (C.baseCtx <> C.bsCtx)

-- | Count the number of set bits in a 'BS.ByteString'.
countSetBits :: BS.ByteString -> IO CInt
countSetBits bs = [C.block|
  int {
    int i, bits = 0;
    for (i = 0; i < $bs-len:bs; i++) {
      unsigned char ch = $bs-ptr:bs[i];
      bits += (ch * 01001001001ULL & 042104210421ULL) % 017;
    }
    return bits;
  }
  |]

In this example, we use the bs-len and bs-ptr anti-quoters to get the length and pointer for a Haskell ByteString. inline-c has a modular design: these anti-quoters are completely optional and can be included on demand. The C.context invocation adds the extra ByteString anti-quoters to the base set. Similar facilities are present to easily use Haskell Vectors, as well as for invoking Haskell closures from C code.

Larger examples

We have included various examples in the inline-c and inline-c-nag repositories. Currently they're geared toward scientific and numerical computing, but we would welcome contributions using inline-c in other fields.

For instance, gsl-ode.hs is a great example of combining the strengths of C and the strengths of Haskell to good effect: we use a function from C's GNU Scientific Library for solving ordinary differential equations (ODEs) to solve a Lorenz system, and then take advantage of the very nice Chart-diagrams Haskell library to display its x and z coordinates.

In this example, the vec-ptr anti-quoter is used to get a pointer out of a mutable vector:

$vec-ptr:(double *fMut)

where fMut is a variable of type Data.Vector.Storable.Mutable.Vector CDouble. Moreover, the fun anti-quoter is used to get a function pointer from a Haskell function:

$fun:(int (* funIO) (double t, const double y[], double dydt[], void * params))

where funIO is a Haskell function of type

CDouble -> Ptr CDouble -> Ptr CDouble -> Ptr () -> IO CInt

Note that all these anti-quoters (apart from the ones where only one type is allowed, like vec-len or bs-ptr) force the user to specify the target C type. The alternative would have been to write the Haskell type. Either way, some type ascription is unfortunately required, due to a limitation of Template Haskell: it is not possible to get the type of locally defined variables. We chose C type annotations because this way the user can understand precisely, and state explicitly, the target type of any marshalling.

How it works under the hood

inline-c generates a piece of C code for most of the Template Haskell functions and quasi-quoters that it exports. So when you write

[C.exp| double{ cos($(double x)) } |]

a C function gets generated:

double some_name(double x) {
  return cos(x);
}

This function is then bound to in Haskell through an automatically generated FFI import declaration, and invoked passing the right argument - the x variable from the Haskell environment. The types specified in C are automatically translated to the corresponding Haskell types, to generate the correct type signatures.

Custom anti-quoters, such as vec-ptr and vec-len, handle the C and Haskell types independently. For example, when writing

[C.block| double {
    int i;
    double res = 0;
    for (i = 0; i < $vec-len:xs; i++) {
      res += $vec-ptr:(double *xs)[i];
    }
    return res;
  } |]

we'll get a function of type

double some_name(int xs_len, double *xs_ptr)

and on the Haskell side the variable xs will be used in conjunction with some code getting its length and the underlying pointer, both to be passed as arguments.

Building programs that use inline-c

The C code that inline-c generates is stored in a file named like the Haskell source file, but with a .c extension.

When using cabal, it is enough to specify the generated C sources, and any options for the C code:

executable foo
  main-is: Main.hs
  other-modules: Foo, Bar
  hs-source-dirs: src
  -- Here the corresponding C sources must be listed for every module
  -- that uses C code. In this example, Main.hs and Bar.hs do, but
  -- Foo.hs does not.
  c-sources: src/Main.c, src/Bar.c
  -- These flags will be passed to the C compiler
  cc-options: -Wall -O2
  -- Libraries to link the code with.
  extra-libraries: m
  ...

Note that currently cabal repl is not supported, because the C code is not compiled and linked appropriately. However, cabal repl will only fail at the end, when trying to load the compiled C code, which means that we can still use it to type check our package when developing.

If we were to compile the above manually, we could do:

$ ghc -c Main.hs
$ cc -c Main.c -o Main_c.o
$ ghc Foo.hs
$ ghc Bar.hs
$ cc -c Bar.c -o Bar_c.o
$ ghc Main.o Foo.o Bar.o Main_c.o Bar_c.o -lm -o Main

Extending inline-c

As mentioned previously, inline-c can be extended by defining custom anti-quoters. Moreover, we can also tell inline-c about more C types beyond the primitive ones.

Both operations are done via the Context data type. Specifically, the Context contains a TypesTable, mapping C type specifiers to Haskell types; and a Map of AntiQuoters. A baseCtx is provided specifying mappings from all the base C types to Haskell types (int to CInt, double to CDouble, and so on). Contexts can be composed using their Monoid instance.

For example, the vecCtx context contains two anti-quoters, vec-len and vec-ptr. When using inline-c with external libraries we often define a context dedicated to said library, defining a TypesTable converting common types found in the library to their Haskell counterparts. For example, inline-c-nag defines a context containing information regarding the types commonly used in the NAG scientific library.

See the Language.C.Inline.Context module documentation for more.

C++ support

Our original use case for inline-c was always C oriented. However, thanks to extensible contexts, it should be possible to build C++ support on top of inline-c, as we dabbled with in inline-c-cpp. In this way, one can mix C++ code into Haskell source files, while reusing the infrastructure that inline-c provides for invoking foreign functions. Since inline-c generates C wrapper functions for all inline expressions, one gets a function with bona fide C linkage to wrap a C++ call, for free. Dealing with C++ templates, passing C++ objects in and out, and conveniently manipulating them from Haskell are the next challenges. If C++ support is what you need, feel free to contribute to this ongoing effort!

Wrapping up

We meant inline-c as a simple, modular alternative to monolithic binding libraries, borrowing the core concept of FFI-less programming of foreign code from the H project and language-c-inline. But this is just the first cut! We are releasing the library to the community early in hopes that it will accelerate the Haskell community's embrace of quality foreign libraries where they exist, as an alternative to expending considerable resources reinventing such libraries for little benefit. Numerical programming, machine learning, computer vision, GUI programming and data analysis come to mind as obvious areas where we want to leverage existing quality code. In fact, FP Complete is using inline-c today to enable quick access to all of NAG, a roughly 1.6K-function-strong library, for a large compute-intensive codebase. We hope to see many more use cases in the future.
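
To make the "under the hood" story concrete in plain Haskell: the generated binding is morally equivalent to writing the FFI import by hand. Here is a hedged sketch that imports cos directly from the C math library, standing in for the generated some_name wrapper (the generated names are internal to inline-c; c_cos and mycos below are invented for the example, and the real inline-c mycos lives in IO):

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.Types (CDouble)

-- Morally what inline-c produces for [C.exp| double{ cos($(double x)) } |]:
-- a generated C wrapper with C linkage, plus an FFI import like this one.
foreign import ccall unsafe "math.h cos"
    c_cos :: CDouble -> CDouble

mycos :: CDouble -> CDouble
mycos = c_cos
```

This is exactly the boilerplate that inline-c writes for you, once per quasiquote, with the marshalling of captured variables generated from the C type annotations.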

Categories: Functional Programming


The Haskell language provides the following guarantee (with caveats): if two programs are equal according to equational reasoning then they will behave the same. On the other hand, Haskell does not guarantee that equal programs will generate identical performance. Consequently, Haskell library writers must employ rewrite rules to ensure that their abstractions do not interfere with performance.Now suppose there were a hypothetical language with a stronger guarantee: if two programs are equal then they generate identical executables. Such a language would be immune to abstraction: no matter how many layers of indirection you might add the binary size and runtime performance would be unaffected.Here I will introduce such an intermediate language named Morte that obeys this stronger guarantee. I have not yet implemented a back-end code generator for Morte, but I wanted to pause to share what I have completed so far because Morte uses several tricks from computer science that I believe deserve more attention.Morte is nothing more than a bare-bones implementation of the calculus of constructions, which is a specific type of lambda calculus. The only novelty is how I intend to use this lambda calculus: as a super-optimizer.NormalizationThe typed lambda calculus possesses a useful property: every term in the lambda calculus has a unique normal form if you beta-reduce everything. If you're new to lambda calculus, normalizing an expression equates to indiscriminately inlining every function call.What if we built a programming language whose intermediate language was lambda calculus? What if optimization was just normalization of lambda terms (i.e. indiscriminate inlining)? If so, then we would could abstract freely, knowing that while compile times might increase, our final executable would never change.RecursionNormally you would not want to inline everything because infinitely recursive functions would become infinitely large expressions. 
Fortunately, we can often translate recursive code to non-recursive code!

I'll demonstrate this trick first in Haskell and then in Morte. Let's begin from the following recursive List type along with a recursive map function over lists:

import Prelude hiding (map, foldr)

data List a = Cons a (List a) | Nil

example :: List Int
example = Cons 1 (Cons 2 (Cons 3 Nil))

map :: (a -> b) -> List a -> List b
map f  Nil       = Nil
map f (Cons a l) = Cons (f a) (map f l)

-- Argument order intentionally switched
foldr :: List a -> (a -> x -> x) -> x -> x
foldr  Nil       c n = n
foldr (Cons a l) c n = c a (foldr l c n)

result :: Int
result = foldr (map (+1) example) (+) 0

-- result = 9

Now imagine that we disable all recursion in Haskell: no more recursive types and no more recursive functions. Now we must reject the above program because:

- the List data type definition recursively refers to itself
- the map and foldr functions recursively refer to themselves

Can we still encode lists in a non-recursive dialect of Haskell?

Yes, we can!

-- This is a valid Haskell program
{-# LANGUAGE RankNTypes #-}

import Prelude hiding (map, foldr)

type List a = forall x . (a -> x -> x) -> x -> x

example :: List Int
example = \cons nil -> cons 1 (cons 2 (cons 3 nil))

map :: (a -> b) -> List a -> List b
map f l = \cons nil -> l (\a x -> cons (f a) x) nil

foldr :: List a -> (a -> x -> x) -> x -> x
foldr l = l

result :: Int
result = foldr (map (+ 1) example) (+) 0

-- result = 9

Carefully note that:

- List is no longer defined recursively in terms of itself
- map and foldr are no longer defined recursively in terms of themselves

Yet, we somehow managed to build a list, map a function over the list, and fold the list, all without ever using recursion! We did this by encoding the list as a fold, which is why foldr became the identity function.

This trick works for more than just lists. You can take any recursive data type and mechanically transform the type into a fold and transform functions on the type into functions on folds.
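To check that the fold encoding really behaves like the original list, here is a self-contained version of the code above with primed names (so it can coexist with the Prelude) plus a toList helper of my own that converts the encoded list back to an ordinary Haskell list; the primed names and toList are illustrative additions, not part of the original post.

```haskell
{-# LANGUAGE RankNTypes #-}

-- The fold-encoded (Boehm-Berarducci) list from the post, renamed
-- so it does not clash with the Prelude.
type List' a = forall x . (a -> x -> x) -> x -> x

example' :: List' Int
example' = \cons nil -> cons 1 (cons 2 (cons 3 nil))

map' :: (a -> b) -> List' a -> List' b
map' f l = \cons nil -> l (\a x -> cons (f a) x) nil

foldr' :: List' a -> (a -> x -> x) -> x -> x
foldr' l = l

-- Recover an ordinary list by folding with (:) and []
toList :: List' a -> [a]
toList l = l (:) []

result' :: Int
result' = foldr' (map' (+ 1) example') (+) 0
```

Folding map' (+ 1) example' with (+) and 0 sums [2, 3, 4], matching the result = 9 from the recursive version.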
If you want to learn more about this trick, its specific name is "Boehm-Berarducci encoding". If you are curious, it is in turn equivalent to an even more general concept from category theory known as "F-algebras", which let you encode inductive things in a non-inductive way.

Non-recursive code greatly simplifies equational reasoning. For example, we can easily prove that we can optimize map id l to l:

map id l

-- Inline: map f l = \cons nil -> l (\a x -> cons (f a) x) nil
= \cons nil -> l (\a x -> cons (id a) x) nil

-- Inline: id x = x
= \cons nil -> l (\a x -> cons a x) nil

-- Eta-reduce
= \cons nil -> l cons nil

-- Eta-reduce
= l

Note that we did not need to use induction to prove this optimization because map is no longer recursive. The optimization became downright trivial, so trivial that we can automate it!

Morte optimizes programs using this same simple scheme:

- Beta-reduce everything (equivalent to inlining)
- Eta-reduce everything

To illustrate this, I will desugar our high-level Haskell code to the calculus of constructions. This desugaring process is currently manual (and tedious), but I plan to automate this, too, by providing a front-end high-level language similar to Haskell that compiles to Morte:

-- mapid.mt
(   \(List : * -> *)
->  \(  map
    :   forall (a : *) -> forall (b : *) -> (a -> b) -> List a -> List b
    )
->  \(id : forall (a : *) -> a -> a)
->  \(a : *) -> map a a (id a)
)

-- List
(\(a : *) -> forall (x : *) -> (a -> x -> x) -> x -> x)

-- map
(   \(a : *)
->  \(b : *)
->  \(f : a -> b)
->  \(l : forall (x : *) -> (a -> x -> x) -> x -> x)
->  \(x : *)
->  \(Cons : b -> x -> x)
->  \(Nil: x)
->  l x (\(va : a) -> \(vx : x) -> Cons (f va) vx) Nil
)

-- id
(\(a : *) -> \(va : a) -> va)

This line of code is the "business end" of the program:

\(a : *) -> map a a (id a)

The extra 'a' business is because in any polymorphic lambda calculus you explicitly accept polymorphic types as arguments and specialize functions by applying them to types.
Higher-level functional languages like Haskell or ML use type inference to automatically infer and supply type arguments when possible.

We can compile this program using the morte executable, which accepts a Morte program on stdin, outputs the program's type on stderr, and outputs the optimized program on stdout:

$ morte < mapid.mt
∀(a : *) → (∀(x : *) → (a → x → x) → x → x) → ∀(x : *) → (a → x → x) → x → x
λ(a : *) → λ(l : ∀(x : *) → (a → x → x) → x → x) → l

The first line is the type, which is a desugared form of:

forall a . List a -> List a

The second line is the program, which is the identity function on lists. Morte optimized away the map completely, the same way we did by hand.

Morte optimized away the rest of the code, too. Dead-code elimination is just an emergent property of Morte's simple optimization scheme.

Equality

We can double-check our answer by asking Morte to optimize the identity function on lists:

-- idlist.mt
(   \(List : * -> *)
->  \(id : forall (a : *) -> a -> a)
->  \(a : *) -> id (List a)
)

-- List
(\(a : *) -> forall (x : *) -> (a -> x -> x) -> x -> x)

-- id
(\(a : *) -> \(va : a) -> va)

Sure enough, Morte outputs an alpha-equivalent result (meaning the same up to variable renaming):

$ ~/.cabal/bin/morte < idlist.mt
∀(a : *) → (∀(x : *) → (a → x → x) → x → x) → ∀(x : *) → (a → x → x) → x → x
λ(a : *) → λ(va : ∀(x : *) → (a → x → x) → x → x) → va

We can even use the morte library to mechanically check whether two Morte expressions are alpha-, beta-, and eta-equivalent.
We can parse our two Morte files into Morte's Expr type and then use the Eq instance for Expr to test for equivalence:

$ ghci
Prelude> import qualified Data.Text.Lazy.IO as Text
Prelude Text> txt1 <- Text.readFile "mapid.mt"
Prelude Text> txt2 <- Text.readFile "idlist.mt"
Prelude Text> import Morte.Parser (exprFromText)
Prelude Text Morte.Parser> let e1 = exprFromText txt1
Prelude Text Morte.Parser> let e2 = exprFromText txt2
Prelude Text Morte.Parser> import Control.Applicative (liftA2)
Prelude Text Morte.Parser Control.Applicative> liftA2 (==) e1 e2
Right True
$ -- `Right` means both expressions parsed successfully
$ -- `True` means they are alpha-, beta-, and eta-equivalent

We can use this to mechanically verify that two Morte programs optimize to the same result.

Compile-time computation

Morte can compute as much (or as little) at compile time as you want. The more information you encode directly within lambda calculus, the more compile-time computation Morte will perform for you. For example, if we translate our Haskell List code entirely to lambda calculus, then Morte will statically compute the result at compile time.

-- nine.mt
(   \(Nat : *)
->  \(zero : Nat)
->  \(one : Nat)
->  \((+) : Nat -> Nat -> Nat)
->  \((*) : Nat -> Nat -> Nat)
->  \(List : * -> *)
->  \(Cons : forall (a : *) -> a -> List a -> List a)
->  \(Nil : forall (a : *) -> List a)
->  \(  map
    :   forall (a : *) -> forall (b : *) -> (a -> b) -> List a -> List b
    )
->  \(  foldr
    :   forall (a : *) -> List a -> forall (r : *) -> (a -> r -> r) -> r -> r
    )
->  (   \(two : Nat)
    ->  \(three : Nat)
    ->  (   \(example : List Nat)
        ->  foldr Nat (map Nat Nat ((+) one) example) Nat (+) zero
        )
        -- example
        (Cons Nat one (Cons Nat two (Cons Nat three (Nil Nat))))
    )
    -- two
    ((+) one one)
    -- three
    ((+) one ((+) one one))
)

-- Nat
(   forall (a : *)
->  (a -> a)
->  a
->  a
)

-- zero
(   \(a : *)
->  \(Succ : a -> a)
->  \(Zero : a)
->  Zero
)

-- one
(   \(a : *)
->  \(Succ : a -> a)
->  \(Zero : a)
->  Succ Zero
)

-- (+)
(   \(m : forall (a : *) -> (a -> a) -> a -> a)
->  \(n : forall (a : *) -> (a -> a) -> a -> a)
->  \(a : *)
->  \(Succ : a -> a)
->  \(Zero : a)
->  m a Succ (n a Succ Zero)
)

-- (*)
(   \(m : forall (a : *) -> (a -> a) -> a -> a)
->  \(n : forall (a : *) -> (a -> a) -> a -> a)
->  \(a : *)
->  \(Succ : a -> a)
->  \(Zero : a)
->  m a (n a Succ) Zero
)

-- List
(   \(a : *)
->  forall (x : *)
->  (a -> x -> x)  -- Cons
->  x              -- Nil
->  x
)

-- Cons
(   \(a : *)
->  \(va : a)
->  \(vas : forall (x : *) -> (a -> x -> x) -> x -> x)
->  \(x : *)
->  \(Cons : a -> x -> x)
->  \(Nil : x)
->  Cons va (vas x Cons Nil)
)

-- Nil
(   \(a : *)
->  \(x : *)
->  \(Cons : a -> x -> x)
->  \(Nil : x)
->  Nil
)

-- map
(   \(a : *)
->  \(b : *)
->  \(f : a -> b)
->  \(l : forall (x : *) -> (a -> x -> x) -> x -> x)
->  \(x : *)
->  \(Cons : b -> x -> x)
->  \(Nil: x)
->  l x (\(va : a) -> \(vx : x) -> Cons (f va) vx) Nil
)

-- foldr
(   \(a : *)
->  \(vas : forall (x : *) -> (a -> x -> x) -> x -> x)
->  vas
)

The relevant line is:

foldr Nat (map Nat Nat ((+) one) example) Nat (+) zero

If you remove the type applications to Nat, this parallels our original Haskell example. We can then evaluate this expression at compile time:

$ morte < nine.mt
∀(a : *) → (a → a) → a → a
λ(a : *) → λ(Succ : a → a) → λ(Zero : a) → Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ Zero))))))))

Morte reduces our program to a Church-encoded nine.

Run-time computation

Morte does not force you to compute everything using lambda calculus at compile time. Suppose that we wanted to use machine arithmetic at run-time instead.
We can do this by parametrizing our program on:

- the Int type,
- operations on Ints, and
- any integer literals we use

We accept these "foreign imports" as ordinary arguments to our program:

-- foreign.mt

-- Foreign imports
    \(Int : *)                  -- Foreign type
->  \((+) : Int -> Int -> Int)  -- Foreign function
->  \((*) : Int -> Int -> Int)  -- Foreign function
->  \(lit@0 : Int)  -- Literal "1"  -- Foreign data
->  \(lit@1 : Int)  -- Literal "2"  -- Foreign data
->  \(lit@2 : Int)  -- Literal "3"  -- Foreign data
->  \(lit@3 : Int)  -- Literal "1"  -- Foreign data
->  \(lit@4 : Int)  -- Literal "0"  -- Foreign data

-- The rest is compile-time lambda calculus
->  (   \(List : * -> *)
    ->  \(Cons : forall (a : *) -> a -> List a -> List a)
    ->  \(Nil : forall (a : *) -> List a)
    ->  \(  map
        :   forall (a : *) -> forall (b : *) -> (a -> b) -> List a -> List b
        )
    ->  \(  foldr
        :   forall (a : *) -> List a -> forall (r : *) -> (a -> r -> r) -> r -> r
        )
    ->  (   \(example : List Int)
        ->  foldr Int (map Int Int ((+) lit@3) example) Int (+) lit@4
        )
        -- example
        (Cons Int lit@0 (Cons Int lit@1 (Cons Int lit@2 (Nil Int))))
    )

    -- List
    (   \(a : *)
    ->  forall (x : *)
    ->  (a -> x -> x)  -- Cons
    ->  x              -- Nil
    ->  x
    )

    -- Cons
    (   \(a : *)
    ->  \(va : a)
    ->  \(vas : forall (x : *) -> (a -> x -> x) -> x -> x)
    ->  \(x : *)
    ->  \(Cons : a -> x -> x)
    ->  \(Nil : x)
    ->  Cons va (vas x Cons Nil)
    )

    -- Nil
    (   \(a : *)
    ->  \(x : *)
    ->  \(Cons : a -> x -> x)
    ->  \(Nil : x)
    ->  Nil
    )

    -- map
    (   \(a : *)
    ->  \(b : *)
    ->  \(f : a -> b)
    ->  \(l : forall (x : *) -> (a -> x -> x) -> x -> x)
    ->  \(x : *)
    ->  \(Cons : b -> x -> x)
    ->  \(Nil: x)
    ->  l x (\(va : a) -> \(vx : x) -> Cons (f va) vx) Nil
    )

    -- foldr
    (   \(a : *)
    ->  \(vas : forall (x : *) -> (a -> x -> x) -> x -> x)
    ->  vas
    )

We can use Morte to optimize the above program, and Morte will reduce the program to nothing but foreign types, operations, and values:

$ morte < foreign.mt
∀(Int : *) → (Int → Int → Int) → (Int → Int → Int) → Int → Int → Int → Int → Int → Int
λ(Int : *) → λ((+) : Int → Int → Int) → λ((*) : Int → Int → Int) → λ(lit : Int) → λ(lit@1 : Int) → λ(lit@2 : Int) → λ(lit@3 : Int) → λ(lit@4 : Int) → (+) ((+) lit@3 lit) ((+) ((+) lit@3 lit@1) ((+) ((+) lit@3 lit@2) lit@4))

If you study that closely, Morte adds lit@3 (the "1" literal) to each literal of the list and then adds them up. We can then pass this foreign syntax tree to our machine arithmetic backend to transform those foreign operations into efficient operations.

Morte lets you choose how much information you want to encode within lambda calculus. The more information you encode in lambda calculus, the more Morte can optimize your program, but the slower your compile times will get, so it's a tradeoff.

Corecursion

Corecursion is the dual of recursion. Where recursion works on finite data types, corecursion works on potentially infinite data types. An example would be the following infinite Stream in Haskell:

data Stream a = Cons a (Stream a)

numbers :: Stream Int
numbers = go 0
  where
    go n = Cons n (go (n + 1))

-- numbers = Cons 0 (Cons 1 (Cons 2 (...

map :: (a -> b) -> Stream a -> Stream b
map f (Cons a l) = Cons (f a) (map f l)

example :: Stream Int
example = map (+ 1) numbers

-- example = Cons 1 (Cons 2 (Cons 3 (...

Again, pretend that we disable any function from referencing itself, so that the above code becomes invalid. This time we cannot reuse the trick from the previous sections, because we cannot encode numbers as a fold without it referencing itself. Try this if you don't believe me.

However, we can still encode corecursive things in a non-corecursive way. This time, we encode our Stream type as an unfold instead of a fold:

-- This is also valid Haskell code
{-# LANGUAGE ExistentialQuantification #-}

data Stream a = forall s . MkStream
    { seed :: s
    , step :: s -> (a, s)
    }

numbers :: Stream Int
numbers = MkStream 0 (\n -> (n, n + 1))

map :: (a -> b) -> Stream a -> Stream b
map f (MkStream s0 k) = MkStream s0 k'
  where
    k' s = (f a, s')
      where
        (a, s') = k s

In other words, we store an initial seed of some type s and a step function of type s -> (a, s) that emits one element of our Stream. The type of our seed s can be anything, and in our numbers example the type of the internal state is Int. Another stream could use a completely different internal state of type (), like this:

-- ones = Cons 1 ones
ones :: Stream Int
ones = MkStream () (\_ -> (1, ()))

The general name for this trick is an "F-coalgebra" encoding of a corecursive type.

Once we encode our infinite stream non-recursively, we can safely optimize the stream by inlining and eta reduction:

map id l

-- l = MkStream s0 k
= map id (MkStream s0 k)

-- Inline definition of `map`
= MkStream s0 k'
  where
    k' = \s -> (id a, s')
      where
        (a, s') = k s

-- Inline definition of `id`
= MkStream s0 k'
  where
    k' = \s -> (a, s')
      where
        (a, s') = k s

-- Inline: (a, s') = k s
= MkStream s0 k'
  where
    k' = \s -> k s

-- Eta reduce
= MkStream s0 k'
  where
    k' = k

-- Inline: k' = k
= MkStream s0 k

-- l = MkStream s0 k
= l

Now let's encode Stream and map in Morte and compile the following four expressions:

map id
id
map f . map g
map (f . g)

Save the following Morte file to stream.mt and then uncomment the expression you want to test:

(   \(id : forall (a : *) -> a -> a)
->  \(  (.)
    :   forall (a : *)
    ->  forall (b : *)
    ->  forall (c : *)
    ->  (b -> c)
    ->  (a -> b)
    ->  (a -> c)
    )
->  \(Pair : * -> * -> *)
->  \(P : forall (a : *) -> forall (b : *) -> a -> b -> Pair a b)
->  \(  first
    :   forall (a : *)
    ->  forall (b : *)
    ->  forall (c : *)
    ->  (a -> b)
    ->  Pair a c
    ->  Pair b c
    )
->  (   \(Stream : * -> *)
    ->  \(  map
        :   forall (a : *)
        ->  forall (b : *)
        ->  (a -> b)
        ->  Stream a
        ->  Stream b
        )
        -- example@1 = example@2
    ->  (   \(example@1 : forall (a : *) -> Stream a -> Stream a)
        ->  \(example@2 : forall (a : *) -> Stream a -> Stream a)
            -- example@3 = example@4
        ->  \(  example@3
            :   forall (a : *)
            ->  forall (b : *)
            ->  forall (c : *)
            ->  (b -> c)
            ->  (a -> b)
            ->  Stream a
            ->  Stream c
            )
        ->  \(  example@4
            :   forall (a : *)
            ->  forall (b : *)
            ->  forall (c : *)
            ->  (b -> c)
            ->  (a -> b)
            ->  Stream a
            ->  Stream c
            )
            -- Uncomment the example you want to test
        ->  example@1
--      ->  example@2
--      ->  example@3
--      ->  example@4
        )
        -- example@1
        (\(a : *) -> map a a (id a))
        -- example@2
        (\(a : *) -> id (Stream a))
        -- example@3
        (   \(a : *)
        ->  \(b : *)
        ->  \(c : *)
        ->  \(f : b -> c)
        ->  \(g : a -> b)
        ->  map a c ((.) a b c f g)
        )
        -- example@4
        (   \(a : *)
        ->  \(b : *)
        ->  \(c : *)
        ->  \(f : b -> c)
        ->  \(g : a -> b)
        ->  (.) (Stream a) (Stream b) (Stream c) (map b c f) (map a b g)
        )
    )
    -- Stream
    (   \(a : *)
    ->  forall (x : *)
    ->  (forall (s : *) -> s -> (s -> Pair a s) -> x)
    ->  x
    )
    -- map
    (   \(a : *)
    ->  \(b : *)
    ->  \(f : a -> b)
    ->  \(  st
        :   forall (x : *)
        ->  (forall (s : *) -> s -> (s -> Pair a s) -> x)
        ->  x
        )
    ->  \(x : *)
    ->  \(S : forall (s : *) -> s -> (s -> Pair b s) -> x)
    ->  st x
        (   \(s : *)
        ->  \(seed : s)
        ->  \(step : s -> Pair a s)
        ->  S s seed (\(seed@1 : s) -> first a b s f (step seed@1))
        )
    )
)

-- id
(\(a : *) -> \(va : a) -> va)

-- (.)
(   \(a : *)
->  \(b : *)
->  \(c : *)
->  \(f : b -> c)
->  \(g : a -> b)
->  \(va : a)
->  f (g va)
)

-- Pair
(\(a : *) -> \(b : *) -> forall (x : *) -> (a -> b -> x) -> x)

-- P
(   \(a : *)
->  \(b : *)
->  \(va : a)
->  \(vb : b)
->  \(x : *)
->  \(P : a -> b -> x)
->  P va vb
)

-- first
(   \(a : *)
->  \(b : *)
->  \(c : *)
->  \(f : a -> b)
->  \(p : forall (x : *) -> (a -> c -> x) -> x)
->  \(x : *)
->  \(Pair : b -> c -> x)
->  p x (\(va : a) -> \(vc : c) -> Pair (f va) vc)
)

Both example@1 and example@2 will generate alpha-equivalent code:

$ morte < example1.mt
∀(a : *) → (∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) → x) → ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) → x
λ(a : *) → λ(st : ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) → x) → st

$ morte < example2.mt
∀(a : *) → (∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) → x) → ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) → x
λ(a : *) → λ(va : ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) → x) → va

Similarly, example@3 and example@4 will generate alpha-equivalent code:

$ morte < example3.mt
∀(a : *) → ∀(b : *) → ∀(c : *) → (b → c) → (a → b) → (∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) → x) → ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (c → s → x) → x) → x) → x
λ(a : *) → λ(b : *) → λ(c : *) → λ(f : b → c) → λ(g : a → b) → λ(st : ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) → x) → λ(x : *) → λ(S : ∀(s : *) → s → (s → ∀(x : *) → (c → s → x) → x) → x) → st x (λ(s : *) → λ(seed : s) → λ(step : s → ∀(x : *) → (a → s → x) → x) → S s seed (λ(seed@1 : s) → λ(x : *) → λ(Pair : c → s → x) → step seed@1 x (λ(va : a) → Pair (f (g va)))))

$ morte < example4.mt
∀(a : *) → ∀(b : *) → ∀(c : *) → (b → c) → (a → b) → (∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) → x) → ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (c → s → x) → x) → x) → x
λ(a : *) → λ(b : *) → λ(c : *) → λ(f : b → c) → λ(g : a → b) → λ(va : ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) → x) → λ(x : *) → λ(S : ∀(s : *) → s → (s → ∀(x : *) → (c → s → x) → x) → x) → va x (λ(s : *) → λ(seed : s) → λ(step : s → ∀(x : *) → (a → s → x) → x) → S s seed (λ(seed@1 : s) → λ(x : *) → λ(Pair : c → s → x) → step seed@1 x (λ(va : a) → Pair (f (g va))))

We inadvertently proved stream fusion for free, but we're still not done yet! Everything we learn about recursive and corecursive sequences can be applied to model recursive and corecursive effects!

Effects

I will conclude this post by showing how to model both recursive and corecursive programs that have side effects. The recursive program will echo ninety-nine lines from stdin to stdout.
The equivalent Haskell program is in the comment header:

-- recursive.mt

-- The Haskell code we will translate to Morte:
--
--     import Prelude hiding (
--         (+), (*), IO, putStrLn, getLine, (>>=), (>>), return )
--
--     -- Simple prelude
--
--     data Nat = Succ Nat | Zero
--
--     zero :: Nat
--     zero = Zero
--
--     one :: Nat
--     one = Succ Zero
--
--     (+) :: Nat -> Nat -> Nat
--     Zero   + n = n
--     Succ m + n = m + Succ n
--
--     (*) :: Nat -> Nat -> Nat
--     Zero   * n = Zero
--     Succ m * n = n + (m * n)
--
--     foldNat :: Nat -> (a -> a) -> a -> a
--     foldNat  Zero    f x = x
--     foldNat (Succ m) f x = f (foldNat m f x)
--
--     data IO r
--         = PutStrLn String (IO r)
--         | GetLine (String -> IO r)
--         | Return r
--
--     putStrLn :: String -> IO U
--     putStrLn str = PutStrLn str (Return Unit)
--
--     getLine :: IO String
--     getLine = GetLine Return
--
--     return :: a -> IO a
--     return = Return
--
--     (>>=) :: IO a -> (a -> IO b) -> IO b
--     PutStrLn str io >>= f = PutStrLn str (io >>= f)
--     GetLine k       >>= f = GetLine (\str -> k str >>= f)
--     Return r        >>= f = f r
--
--     -- Derived functions
--
--     (>>) :: IO U -> IO U -> IO U
--     m >> n = m >>= \_ -> n
--
--     two :: Nat
--     two = one + one
--
--     three :: Nat
--     three = one + one + one
--
--     four :: Nat
--     four = one + one + one + one
--
--     five :: Nat
--     five = one + one + one + one + one
--
--     six :: Nat
--     six = one + one + one + one + one + one
--
--     seven :: Nat
--     seven = one + one + one + one + one + one + one
--
--     eight :: Nat
--     eight = one + one + one + one + one + one + one + one
--
--     nine :: Nat
--     nine = one + one + one + one + one + one + one + one + one
--
--     ten :: Nat
--     ten = one + one + one + one + one + one + one + one + one + one
--
--     replicateM_ :: Nat -> IO U -> IO U
--     replicateM_ n io = foldNat n (io >>) (return Unit)
--
--     ninetynine :: Nat
--     ninetynine = nine * ten + nine
--
--     main_ :: IO U
--     main_ = replicateM_ ninetynine (getLine >>= putStrLn)

-- "Free" variables
(   \(String : * )
->  \(U : *)
->  \(Unit : U)
    -- Simple prelude
->  (   \(Nat : *)
    ->  \(zero : Nat)
    ->  \(one : Nat)
    ->  \((+) : Nat -> Nat -> Nat)
    ->  \((*) : Nat -> Nat -> Nat)
    ->  \(foldNat : Nat -> forall (a : *) -> (a -> a) -> a -> a)
    ->  \(IO : * -> *)
    ->  \(return : forall (a : *) -> a -> IO a)
    ->  \(  (>>=)
        :   forall (a : *)
        ->  forall (b : *)
        ->  IO a
        ->  (a -> IO b)
        ->  IO b
        )
    ->  \(putStrLn : String -> IO U)
    ->  \(getLine : IO String)
        -- Derived functions
    ->  (   \((>>) : IO U -> IO U -> IO U)
        ->  \(two : Nat)
        ->  \(three : Nat)
        ->  \(four : Nat)
        ->  \(five : Nat)
        ->  \(six : Nat)
        ->  \(seven : Nat)
        ->  \(eight : Nat)
        ->  \(nine : Nat)
        ->  \(ten : Nat)
        ->  (   \(replicateM_ : Nat -> IO U -> IO U)
            ->  \(ninetynine : Nat)
            ->  replicateM_ ninetynine ((>>=) String U getLine putStrLn)
            )
            -- replicateM_
            (   \(n : Nat)
            ->  \(io : IO U)
            ->  foldNat n (IO U) ((>>) io) (return U Unit)
            )
            -- ninetynine
            ((+) ((*) nine ten) nine)
        )
        -- (>>)
        (   \(m : IO U)
        ->  \(n : IO U)
        ->  (>>=) U U m (\(_ : U) -> n)
        )
        -- two
        ((+) one one)
        -- three
        ((+) one ((+) one one))
        -- four
        ((+) one ((+) one ((+) one one)))
        -- five
        ((+) one ((+) one ((+) one ((+) one one))))
        -- six
        ((+) one ((+) one ((+) one ((+) one ((+) one one)))))
        -- seven
        ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one one))))))
        -- eight
        ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one one)))))))
        -- nine
        ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one one))))))))
        -- ten
        ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one one)))))))))
    )
    -- Nat
    (   forall (a : *)
    ->  (a -> a)
    ->  a
    ->  a
    )
    -- zero
    (\(a : *) -> \(Succ : a -> a) -> \(Zero : a) -> Zero)
    -- one
    (\(a : *) -> \(Succ : a -> a) -> \(Zero : a) -> Succ Zero)
    -- (+)
    (   \(m : forall (a : *) -> (a -> a) -> a -> a)
    ->  \(n : forall (a : *) -> (a -> a) -> a -> a)
    ->  \(a : *)
    ->  \(Succ : a -> a)
    ->  \(Zero : a)
    ->  m a Succ (n a Succ Zero)
    )
    -- (*)
    (   \(m : forall (a : *) -> (a -> a) -> a -> a)
    ->  \(n : forall (a : *) -> (a -> a) -> a -> a)
    ->  \(a : *)
    ->  \(Succ : a -> a)
    ->  \(Zero : a)
    ->  m a (n a Succ) Zero
    )
    -- foldNat
    (\(n : forall (a : *) -> (a -> a) -> a -> a) -> n)
    -- IO
    (   \(r : *)
    ->  forall (x : *)
    ->  (String -> x -> x)
    ->  ((String -> x) -> x)
    ->  (r -> x)
    ->  x
    )
    -- return
    (   \(a : *)
    ->  \(va : a)
    ->  \(x : *)
    ->  \(PutStrLn : String -> x -> x)
    ->  \(GetLine : (String -> x) -> x)
    ->  \(Return : a -> x)
    ->  Return va
    )
    -- (>>=)
    (   \(a : *)
    ->  \(b : *)
    ->  \(m : forall (x : *) -> (String -> x -> x) -> ((String -> x) -> x) -> (a -> x) -> x)
    ->  \(f : a -> forall (x : *) -> (String -> x -> x) -> ((String -> x) -> x) -> (b -> x) -> x)
    ->  \(x : *)
    ->  \(PutStrLn : String -> x -> x)
    ->  \(GetLine : (String -> x) -> x)
    ->  \(Return : b -> x)
    ->  m x PutStrLn GetLine (\(va : a) -> f va x PutStrLn GetLine Return)
    )
    -- putStrLn
    (   \(str : String)
    ->  \(x : *)
    ->  \(PutStrLn : String -> x -> x)
    ->  \(GetLine : (String -> x) -> x)
    ->  \(Return : U -> x)
    ->  PutStrLn str (Return Unit)
    )
    -- getLine
    (   \(x : *)
    ->  \(PutStrLn : String -> x -> x)
    ->  \(GetLine : (String -> x) -> x)
    ->  \(Return : String -> x)
    ->  GetLine Return
    )
)

This program will compile to a completely unrolled read-write loop, as most recursive programs will:

$ morte < recursive.mt
∀(String : *) → ∀(U : *) → U → ∀(x : *) → (String → x → x) → ((String → x) → x) → (U → x) → x
λ(String : *) → λ(U : *) → λ(Unit : U) → λ(x : *) → λ(PutStrLn : String → x → x) → λ(GetLine : (String → x) → x) → λ(Return : U → x) → GetLine (λ(va : String) → PutStrLn va (GetLine (λ(va@1 : String) → PutStrLn va@1 (GetLine (λ(va@2 : String) → PutStrLn va@2 (GetLine (λ(va@3 : String) → PutStrLn ...
...
GetLine (λ(va@92 : String) → PutStrLn va@92 (GetLine (λ(va@93 : String) → PutStrLn va@93 (GetLine (λ(va@94 : String) → PutStrLn va@94 (GetLine (λ(va@95 : String) → PutStrLn va@95 (GetLine (λ(va@96 : String) → PutStrLn va@96 (GetLine (λ(va@97 : String) → PutStrLn va@97 (GetLine (λ(va@98 : String) → PutStrLn va@98 (Return Unit))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))

In contrast, if we encode the effects corecursively we can express a program that echoes indefinitely from stdin to stdout:

-- corecursive.mt

-- data IOF r s
--     = PutStrLn String s
--     | GetLine (String -> s)
--     | Return r
--
-- data IO r = forall s . MkIO s (s -> IOF r s)
--
-- main = MkIO
--     Nothing
--     (maybe (\str -> PutStrLn str Nothing) (GetLine Just))

(   \(String : *)
->  (   \(Maybe : * -> *)
    ->  \(Just : forall (a : *) -> a -> Maybe a)
    ->  \(Nothing : forall (a : *) -> Maybe a)
    ->  \(  maybe
        :   forall (a : *) -> Maybe a -> forall (x : *) -> (a -> x) -> x -> x
        )
    ->  \(IOF : * -> * -> *)
    ->  \(  PutStrLn
        :   forall (r : *) -> forall (s : *) -> String -> s -> IOF r s
        )
    ->  \(  GetLine
        :   forall (r : *) -> forall (s : *) -> (String -> s) -> IOF r s
        )
    ->  \(  Return
        :   forall (r : *) -> forall (s : *) -> r -> IOF r s
        )
    ->  (   \(IO : * -> *)
        ->  \(  MkIO
            :   forall (r : *) -> forall (s : *) -> s -> (s -> IOF r s) -> IO r
            )
        ->  (   \(main : forall (r : *) -> IO r)
            ->  main
            )
            -- main
            (   \(r : *)
            ->  MkIO r (Maybe String) (Nothing String)
                (   \(m : Maybe String)
                ->  maybe String m (IOF r (Maybe String))
                    (\(str : String) -> PutStrLn r (Maybe String) str (Nothing String))
                    (GetLine r (Maybe String) (Just String))
                )
            )
        )
        -- IO
        (   \(r : *)
        ->  forall (x : *)
        ->  (forall (s : *) -> s -> (s -> IOF r s) -> x)
        ->  x
        )
        -- MkIO
        (   \(r : *)
        ->  \(s : *)
        ->  \(seed : s)
        ->  \(step : s -> IOF r s)
        ->  \(x : *)
        ->  \(k : forall (s : *) -> s -> (s -> IOF r s) -> x)
        ->  k s seed step
        )
    )
    -- Maybe
    (\(a : *) -> forall (x : *) -> (a -> x) -> x -> x)
    -- Just
    (   \(a : *)
    ->  \(va : a)
    ->  \(x : *)
    ->  \(Just : a -> x)
    ->  \(Nothing : x)
    ->  Just va
    )
    -- Nothing
    (   \(a : *)
    ->  \(x : *)
    ->  \(Just : a -> x)
    ->  \(Nothing : x)
    ->  Nothing
    )
    -- maybe
    (\(a : *) -> \(m : forall (x : *) -> (a -> x) -> x-> x) -> m)
    -- IOF
    (   \(r : *)
    ->  \(s : *)
    ->  forall (x : *)
    ->  (String -> s -> x)
    ->  ((String -> s) -> x)
    ->  (r -> x)
    ->  x
    )
    -- PutStrLn
    (   \(r : *)
    ->  \(s : *)
    ->  \(str : String)
    ->  \(vs : s)
    ->  \(x : *)
    ->  \(PutStrLn : String -> s -> x)
    ->  \(GetLine : (String -> s) -> x)
    ->  \(Return : r -> x)
    ->  PutStrLn str vs
    )
    -- GetLine
    (   \(r : *)
    ->  \(s : *)
    ->  \(k : String -> s)
    ->  \(x : *)
    ->  \(PutStrLn : String -> s -> x)
    ->  \(GetLine : (String -> s) -> x)
    ->  \(Return : r -> x)
    ->  GetLine k
    )
    -- Return
    (   \(r : *)
    ->  \(s : *)
    ->  \(vr : r)
    ->  \(x : *)
    ->  \(PutStrLn : String -> s -> x)
    ->  \(GetLine : (String -> s) -> x)
    ->  \(Return : r -> x)
    ->  Return vr
    )
)

This compiles to a state machine that we can unfold one step at a time:

$ morte < corecursive.mt
∀(String : *) → ∀(r : *) → ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (String → s → x) → ((String → s) → x) → (r → x) → x) → x) → x
λ(String : *) → λ(r : *) → λ(x : *) → λ(k : ∀(s : *) → s → (s → ∀(x : *) → (String → s → x) → ((String → s) → x) → (r → x) → x) → x) → k (∀(x : *) → (String → x) → x → x) (λ(x : *) → λ(Just : String → x) → λ(Nothing : x) → Nothing) (λ(m : ∀(x : *) → (String → x) → x → x) → m (∀(x : *) → (String → (∀(x : *) → (String → x) → x → x) → x) → ((String → ∀(x : *) → (String → x) → x → x) → x) → (r → x) → x) (λ(str : String) → λ(x : *) → λ(PutStrLn : String → (∀(x : *) → (String → x) → x → x) → x) → λ(GetLine : (String → ∀(x : *) → (String → x) → x → x) → x) → λ(Return : r → x) → PutStrLn str (λ(x : *) → λ(Just : String → x) → λ(Nothing : x) → Nothing)) (λ(x : *) → λ(PutStrLn : String → (∀(x : *) → (String → x) → x → x) → x) → λ(GetLine : (String → ∀(x : *) → (String → x) → x → x) → x) → λ(Return : r → x) → GetLine (λ(va : String) → λ(x : *) → λ(Just : String → x) → λ(Nothing : x) → Just va))

I don't expect you to understand that output, other than to know that we can translate it to any backend that provides functions and primitive read/write operations.

Conclusion

If you would like to use Morte, you can find the library on both Github and Hackage. I also provide a Morte tutorial that you can use to learn more about the library.

Morte is dependently typed in theory, but in practice I have not exercised this feature, so I don't understand its implications. If this turns out to be a mistake then I will downgrade Morte to System Fw, which has higher kinds and polymorphism, but no dependent types.

Additionally, Morte might be usable to transmit code in a secure and typed way in a distributed environment, or to share code between diverse functional languages by providing a common intermediate language. However, both of those scenarios require additional work, such as establishing a shared set of foreign primitives and creating Morte encoders/decoders for each target language.

Also, there are additional optimizations which Morte might implement in the future.
For example, Morte could use free theorems (equalities you deduce from the types) to simplify some code fragments even further, but Morte currently does not do this.

My next goals are:

- Add a back-end to compile Morte to LLVM
- Add a front-end to desugar a medium-level Haskell-like language to Morte

Once those steps are complete, Morte will be a usable intermediate language for writing super-optimizable programs.

Also, if you're wondering, the name Morte is a tribute to a talking skull from the game Planescape: Torment, since the Morte library is a "bare-bones" calculus of constructions.

Literature

If this topic interests you, you may find the following links helpful, in roughly increasing order of difficulty:

- Data and Codata - A blog post by Dan Piponi introducing the notions of data and codata
- Church encoding - A Wikipedia article on Church encoding (converting things to lambda expressions)
- Total Functional Programming - A paper by D. A. Turner on total programming using data and codata
- Recursive types for free! - An unpublished draft by Philip Wadler about F-algebras and F-coalgebras
- Understanding F-algebras - A blog post by Bartosz Milewski
- Beyond Church encoding: Boehm-Berarducci isomorphism of algebraic data types and polymorphic lambda-terms - Oleg Kiselyov's collection of notes on Boehm-Berarducci encoding, a more complete version of Church encoding
- F-algebras and F-coalgebras - Wikipedia articles that are short, sweet, and utterly impenetrable unless you already understand the subject

Categories: Functional Programming

Since we've received multiple bug reports on this, and there are many people suffering from it reporting on the cabal issue, Neil and I decided a more public announcement was warranted.

There is an as-yet undiagnosed bug in cabal which causes some packages to fail to install. Packages known to be affected are blaze-builder-enumerator, data-default-instances-old-locale, vector-binary-instances, and data-default-instances-containers. The output looks something like:

Resolving dependencies...
Configuring data-default-instances-old-locale-0.0.1...
Building data-default-instances-old-locale-0.0.1...
Failed to install data-default-instances-old-locale-0.0.1
Build log ( C:\Users\gl67\AppData\Roaming\cabal\logs\data-default-instances-old-locale-0.0.1.log ):
Building data-default-instances-old-locale-0.0.1...
Preprocessing library data-default-instances-old-locale-0.0.1...
[1 of 1] Compiling Data.Default.Instances.OldLocale ( Data\Default\Instances\OldLocale.hs, dist\build\Data\Default\Instances\OldLocale.o )
C:\Users\gl67\repos\MinGHC\7.10.1\ghc-7.10.1\mingw\bin\ar.exe: dist\build\libHSdata-default-instances-old-locale-0.0.1-6jcjjaR25tK4x3nJhHHjFM.a-11336\libHSdata-default-instances-old-locale-0.0.1-6jcjjaR25tK4x3nJhHHjFM.a: No such file or directory
cabal.exe: Error: some packages failed to install:
data-default-instances-old-locale-0.0.1 failed during the building phase. The
exception was:
ExitFailure 1

There are two workarounds I know of at this time:

You can manually unpack and install the package, which seems to work, e.g.:

cabal unpack data-default-instances-old-locale-0.0.1
cabal install .\data-default-instances-old-locale-0.0.1

Drop down to GHC 7.8.4 until the cabal bug is fixed.

For normal users, you can stop reading here. If you're interested in more details and may be able to help fix it, here's a summary of the research I've done so far:

As far as I can tell, this is a bug in cabal-install, not the Cabal library.
Despite reports to the contrary, the parallelization level (-j option) does not seem to have any impact. The only thing that seems to affect the behavior is whether cabal-install unpacks and installs in one step, or does it in two steps. That's why unpacking and then installing works around the bug.

I've stared at cabal logs on this quite a bit, but don't see a rhyme or reason to what's happening here. The bug is easily reproducible, so hopefully someone with more cabal expertise will be able to look at it soon, as this bug has high severity and has been affecting Windows users for almost two months.

Categories: Functional Programming

> {-# LANGUAGE GeneralizedNewtypeDeriving #-}
> import Control.Applicative (Applicative (..))
> import Data.Monoid (Monoid, (<>))
> newtype Expr = Expr String
> instance Show Expr where show (Expr x) = x
> type Type = String
> type TypeScope = [Type]
> type Scope = String
> type Program = String
> expr1, expr2 :: Expr
> expr1 = Expr "False == 3"
> expr2 = Expr "[1, 'a']"
> program1 :: Program
> program1 = undefined
Introduction
For the last month or so, I have been working as a contractor for Luminal. I am helping them implement Fugue, and more specifically Ludwig – a compiler for a statically typed declarative configuration language. This is one of the most interesting projects I have worked on so far – writing a compiler is really fun. While implementing some parts of this compiler, I came across an interesting problem.
In particular, a typeclass instance seemed to adhere to both the Monad and Applicative laws, but with differing behaviour – which felt a bit fishy. I started a discussion on Twitter to understand it better, and these are my thoughts on the matter.
The problem
Suppose we’re writing a typechecker. We have to do a number of things:
Parse the program to get a list of user-defined types.
Typecheck each expression.
A bunch of other things, but for the purpose of this post we’re going to leave it at that.
Now, any of these steps could fail, and we’d like to log the reason for failure. Clearly this is a case for something like Either! Let’s define ourselves an appropriate datatype.
> data Check e a
>     = Failed e
>     | Ok a
>     deriving (Eq, Show)
We can write a straightforward Functor instance:
> instance Functor (Check e) where
>     fmap _ (Failed e) = Failed e
>     fmap f (Ok x)     = Ok (f x)
The Monad instance is also very obvious:
> instance Monad (Check e) where
>     return x = Ok x
>
>     Failed e >>= _ = Failed e
>     Ok x     >>= f = f x
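To see the short-circuiting behaviour concretely, here is a minimal, self-contained sketch. It repeats the Check type and adds a short-circuiting Applicative instance so it compiles on modern GHC (where Monad requires Applicative); the demo value is hypothetical and not from the post:

```haskell
data Check e a = Failed e | Ok a deriving (Eq, Show)

instance Functor (Check e) where
    fmap _ (Failed e) = Failed e
    fmap f (Ok x)     = Ok (f x)

-- Plain, short-circuiting Applicative (equivalent to `ap`).
instance Applicative (Check e) where
    pure = Ok
    Failed e <*> _ = Failed e
    Ok f     <*> x = fmap f x

instance Monad (Check e) where
    Failed e >>= _ = Failed e
    Ok x     >>= f = f x

-- Both failures are present in the source, but >>= stops at the first one.
demo :: Check String Int
demo = do
    _ <- Failed "first error"
    Failed "second error"
```

Evaluating demo yields Failed "first error": the second failure is never even reached.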
However, the Applicative instance is not that obvious – we seem to have a choice.
But first, let's take a step back and stub out our compiler a bit more, so that we have some more context. Imagine we have the following types in our compiler:
data Type = ...
data Expr = ...
data Program = ...
type TypeScope = [Type]
And our code looks like this:
> findDefinedTypes1 :: Program -> Check String TypeScope
> findDefinedTypes1 _ = Ok [] -- Assume we can't define types for now.
> typeCheck1 :: TypeScope -> Expr -> Check String Type
> typeCheck1 _ e = Failed $ "Could not typecheck: " ++ show e
> compiler1 :: Check String ()
> compiler1 = do
>     scope <- findDefinedTypes1 program1
>     typeCheck1 scope expr1 -- False == 3
>     typeCheck1 scope expr2 -- [1, 'a']
>     return ()
On executing compiler1, we get the following error:
*Main> compiler1
Failed "Could not typecheck: False == 3"
Which is correct, but using a compiler entirely written in this fashion would be annoying. Check, like Either, short-circuits on the first error it encounters. This means we would compile our program, fix one error, compile, fix the next error, compile, and so on.
It would be much nicer if users were to see multiple error messages at once.
Of course, this is not always possible. On one hand, if findDefinedTypes1 throws an error, we cannot possibly call typeCheck1, since we do not have a TypeScope.
On the other hand, if findDefinedTypes1 succeeds, shouldn’t it be possible to collect error messages from both typeCheck1 scope expr1 and typeCheck1 scope expr2?
It turns out this is possible, precisely because the second call to typeCheck1 does not depend on the result of the first call – so we can execute them in parallel, if you will. And that is precisely the difference in expressive power between Monad and Applicative: Monadic >>= provides access to previously computed results, where Applicative <*> does not. Let’s (ab?)use this to our advantage.
The solution?
Cleverly, we put together the following instance:
> instance Monoid e => Applicative (Check e) where
>     pure x = Ok x
>
>     Ok f      <*> Ok x      = Ok (f x)
>     Ok _      <*> Failed e  = Failed e
>     Failed e  <*> Ok _      = Failed e
>     Failed e1 <*> Failed e2 = Failed (e1 <> e2)
Using this instance we can effectively collect error messages. We need to change our code a bit to support a collection of error messages, so let’s use [String] instead of String since a list is a Monoid.
> findDefinedTypes2 :: Program -> Check [String] TypeScope
> findDefinedTypes2 _ = Ok [] -- Assume we can't define types for now.
> typeCheck2 :: TypeScope -> Expr -> Check [String] Type
> typeCheck2 _ e = Failed ["Could not typecheck: " ++ show e]
> compiler2 :: Check [String] ()
> compiler2 = do
>     scope <- findDefinedTypes2 program1
>     typeCheck2 scope expr1 *> typeCheck2 scope expr2
>     return ()
Note that *> is the Applicative equivalent of the Monadic >>.
Now, every error is represented by a list of error messages (typically a singleton such as in typeCheck2), and the Applicative <*> combines error messages. If we execute compiler2, we get:
*Main> compiler2
Failed ["Could not typecheck: False == 3",
"Could not typecheck: [1, 'a']"]
Success! But is that all there is to it?
The problem with the solution
The problem is that we have created a situation where <*> is not equal to ap 1. After researching this for a while, it seems that <*> = ap is not a verbatim rule. However, most arguments suggest it should be the case – even the name.
This is important for refactoring, for example. Quite a few Haskell programmers (including myself) would refactor:
do b <- bar
q <- qux
return (Foo b q)
Into:
Foo <$> bar <*> qux
Without putting too much thought in it, just assuming it does the same thing.
In our case, the two versions are clearly similar, but not equal – we would get only one error instead of collecting all the error messages. One could argue that this is close enough, but if you use that argument too frequently, you might just end up with something like PHP.
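The divergence can be made concrete with a small, self-contained sketch. The type V below is a hypothetical stand-in mirroring Check, with the error-collecting Applicative and the short-circuiting Monad side by side:

```haskell
import Control.Monad (ap)

data V e a = F e | S a deriving (Eq, Show)

instance Functor (V e) where
    fmap _ (F e) = F e
    fmap f (S x) = S (f x)

-- Error-collecting Applicative, as in the post.
instance Monoid e => Applicative (V e) where
    pure = S
    S f  <*> S x  = S (f x)
    S _  <*> F e  = F e
    F e  <*> S _  = F e
    F e1 <*> F e2 = F (e1 <> e2)

-- Short-circuiting Monad.
instance Monoid e => Monad (V e) where
    F e >>= _ = F e
    S x >>= f = f x

-- The same expression written with `ap` and with <*>:
viaAp, viaStar :: V [String] (Int, Int)
viaAp   = pure (,) `ap` F ["err1"] `ap` F ["err2"]   -- F ["err1"]
viaStar = pure (,) <*> F ["err1"] <*> F ["err2"]     -- F ["err1","err2"]
```

Two expressions that "should" be interchangeable produce observably different results, which is exactly the refactoring hazard described above.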
The problem becomes more clear in the following fragment:
checkForCyclicImports modules >>
compileAll modules
Which has completely different behaviour from this fragment:
checkForCyclicImports modules *>
compileAll modules
The latter will get stuck in some sort of infinite recursion, while the former will not. This is not a subtle difference anymore. While the problem is easy to spot here (>> vs. *>), this is not always the case:
forEveryImport_ :: Monad m => Module -> (Import -> m ()) -> m ()
Ever since AMP, it is impossible to tell whether this will do a forM_-like or a for_-like traversal without looking at the implementation – this makes it easy to make mistakes.
The solution to the problem with the solution
As we discussed in the previous section, it should be possible for a programmer to tell exactly how a Monad or Applicative will behave, without having to dig into implementations. Having a structure where <*> and ap behave slightly differently makes this hard.
When a Haskell programmer wants to make a clear distinction between two similar types, the first thing that comes to mind is probably newtypes. This problem is no different.
Let’s introduce a newtype for error-collecting Applicative. Since the Functor instance is exactly the same, we might as well generate it using GeneralizedNewtypeDeriving.
> newtype MonoidCheck e a = MonoidCheck {unMonoidCheck :: Check e a}
>     deriving (Functor, Show)
Now, we provide our Applicative instance for MonoidCheck:
> instance Monoid e => Applicative (MonoidCheck e) where
>     pure x = MonoidCheck (Ok x)
>
>     MonoidCheck l <*> MonoidCheck r = MonoidCheck $ case (l, r) of
>         (Ok f     , Ok x     ) -> Ok (f x)
>         (Ok _     , Failed e ) -> Failed e
>         (Failed e , Ok _     ) -> Failed e
>         (Failed e1, Failed e2) -> Failed (e1 <> e2)
Finally, we deliberately avoid writing a Monad instance for MonoidCheck. This has two advantages:

It ensures that when people use MonoidCheck, they are forced to use the Applicative combinators, and they cannot accidentally reduce the number of error messages.

For other programmers reading the code, it is very clear whether we are dealing with short-circuiting behaviour or collecting multiple error messages: it is explicit in the types.
Usage and conversion
Our fragment now becomes:
> findDefinedTypes3 :: Program -> Check [String] TypeScope
> findDefinedTypes3 _ = Ok [] -- Assume we can't define types for now.
> typeCheck3 :: TypeScope -> Expr -> MonoidCheck [String] Type
> typeCheck3 _ e = MonoidCheck $ Failed ["Could not typecheck: " ++ show e]
> compiler3 :: Check [String] ()
> compiler3 = do
>     scope <- findDefinedTypes3 program1
>     unMonoidCheck $ typeCheck3 scope expr1 *> typeCheck3 scope expr2
>     return ()
We can see that while it is not more concise, it is definitely more clear: we can see exactly which functions will collect error messages. Furthermore, if we now try to write:
typeCheck3 scope expr1 >> typeCheck3 scope expr2
We will get a type error, since MonoidCheck has no Monad instance – steering us towards *> instead.
Explicitly, we now convert between Check and MonoidCheck by simply calling MonoidCheck and unMonoidCheck. We can do this inside other transformers if necessary, using e.g. mapReaderT.
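As a sketch of the mapReaderT trick: the Env type and checkName function below are made up for illustration, and Check and MonoidCheck are repeated so the snippet stands alone:

```haskell
import Control.Monad.Trans.Reader (ReaderT (..), mapReaderT)

data Check e a = Failed e | Ok a deriving (Eq, Show)

newtype MonoidCheck e a = MonoidCheck {unMonoidCheck :: Check e a}

-- Hypothetical environment: the names currently in scope.
type Env = [String]

-- An error-collecting checker running inside ReaderT.
checkName :: String -> ReaderT Env (MonoidCheck [String]) ()
checkName n = ReaderT $ \env ->
    if n `elem` env
        then MonoidCheck (Ok ())
        else MonoidCheck (Failed ["unknown name: " ++ n])

-- Convert the whole transformer stack back to the short-circuiting Check.
toCheck :: ReaderT Env (MonoidCheck e) a -> ReaderT Env (Check e) a
toCheck = mapReaderT unMonoidCheck
```

mapReaderT lifts the plain unMonoidCheck conversion over the Reader layer, so no manual unwrapping and rewrapping of ReaderT is needed.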
Data.Either.Validation
The MonoidCheck discussed in this blog post is available as Data.Either.Validation on Hackage. The main difference is that instead of using a newtype, the package authors provide a full-blown datatype.
> data Validation e a
>     = Failure e
>     | Success a
And two straightforward conversion functions:
> validationToEither :: Validation e a -> Either e a
> validationToEither (Failure e) = Left e
> validationToEither (Success x) = Right x
> eitherToValidation :: Either e a -> Validation e a
> eitherToValidation (Left e) = Failure e
> eitherToValidation (Right x) = Success x
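To show the pattern end to end, here is a self-contained sketch with a local stand-in for the package's type (the real one lives in the either package, and its Applicative instance nowadays constrains the error type with Semigroup rather than Monoid; the demo value is hypothetical):

```haskell
-- Local stand-in for Data.Either.Validation.
data Validation e a = Failure e | Success a deriving (Eq, Show)

instance Functor (Validation e) where
    fmap _ (Failure e) = Failure e
    fmap f (Success x) = Success (f x)

-- Error-collecting Applicative; no Monad instance, on purpose.
instance Monoid e => Applicative (Validation e) where
    pure = Success
    Success f  <*> Success x  = Success (f x)
    Success _  <*> Failure e  = Failure e
    Failure e  <*> Success _  = Failure e
    Failure e1 <*> Failure e2 = Failure (e1 <> e2)

validationToEither :: Validation e a -> Either e a
validationToEither (Failure e) = Left e
validationToEither (Success x) = Right x

-- Collect both errors applicatively, then drop back to Either.
demo :: Either [String] (Int, Int)
demo = validationToEither $
    (,) <$> Failure ["no x"] <*> Failure ["no y"]
```

Here demo evaluates to Left ["no x","no y"]: both errors are collected while inside Validation, and the conversion back to Either happens only at the boundary.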
This makes constructing values a bit easier:
Failure ["Can't go mucking with a 'void*'"]
Instead of:
MonoidCheck $ Failed ["Can't go mucking with a 'void*'"]
At this point, it shouldn’t surprise you that Validation intentionally does not provide a Monad instance.
Conclusion
This, of course, is all my opinion – there doesn’t seem to be any definite consensus on whether or not ap should be the same as <*>, since differing behaviour occurs in prominent libraries. While the Monad and Applicative laws are relatively well known, there is no canonical law saying that ap = <*>.
Update: there actually is a canonical law that ap should be <*>, and it was right under my nose in the Monad documentation since AMP. Before that, it was mentioned in the Applicative documentation. Thanks to quchen for pointing that out to me!
A key point here is that the AMP actually related the two typeclasses. Before that, arguing that the two classes were in a way “unrelated” was still a (dubious) option, but that is no longer the case.
Furthermore, considering this as a law might reveal opportunities for optimisation 2.
Lastly, I am definitely a fan of implementing these differing behaviours using different types and then converting between them: the fact that types explicitly tell me about the behaviour of code is one of the reasons I like Haskell.
Thanks to Alex Sayers for proofreading and suggestions.
ap is the Monadic sibling of <*> (which explains why <*> is commonly pronounced ap). It can be implemented on top of >>=/return:
> ap :: Monad m => m (a -> b) -> m a -> m b
> ap mf mx = do
>     f <- mf
>     x <- mx
>     return (f x)
Take this with a grain of salt – currently, GHC does not use any of the Monad laws to perform optimisation. However, some Monad instances use them in RULES pragmas.
2015-05-19T00:00:00Z

Categories: Functional Programming

How Haskellers are seen
The type system and separated IO is an awkward, restricting space suit:
Spending most of their time gazing longingly at the next abstraction to yoink from mathematics:
Looking at anything outside the Haskell language and the type system:
Using unsafePerformIO:
How Haskellers see themselves
No, it’s not a space suit. It’s Iron Man’s suit!
The suit enables him to do impressive feats with confidence and safety:
Look at the immense freedom and power enabled by wearing the suit:
Reality
2015-05-19T00:00:00Z

Categories: Functional Programming