Functional Programming

D more popular than any functional language

The Endeavor - Tue, 07/29/2014 - 13:17

All measures of programming language popularity are indirect and controversial. There’s no consensus on the best way to measure popularity, even if we could gather any data we want. Still, measures of popularity based on data, especially combining several kinds of data, are more useful than personal impressions.

According to one normalized comparison chart, at the time of writing this post, D is easily more popular than all functional programming languages combined. Here’s a portion of the chart, zooming in on D and functional languages.

This hardly seems possible. As I stated up front, all measures of programming language popularity are crude proxies for what we’d really like to know. But still, this says that by at least one measure, D is more popular than any functional language.

The site lets you twiddle the weights assigned to different components of the comparison. And for a wide range of weight choices, D comes out ahead of any functional language.

My preference would be to set the Google Files component to zero, since it’s only looking at web page languages. (The page won’t let you set a component to exactly zero, but you can make it tiny, say 0.001.) GitHub stats give you a good view into the open source world, but it would be an example of availability bias to assume the whole world of software development looks like GitHub. Craigslist might be a good window into commercial software development.

Are you serious? "What degree?"

Planet Haskell - Tue, 07/29/2014 - 06:00
So, serious question.

An aside (get used to them, by the way): people are never sure if I'm serious. I'm, like, seriously? C'mon! I am always silly (well, oftentimes silly, compared to my dour workmates), but I am always serious in my silliness. Chesterton said it well: the opposite of funny is not serious: the opposite of funny is not funny.

Okay, that aside is done. Now onto the topic at hand.

Hands up, those who have ever used your education in your jobs. Hands up, those who needed proof of your degree to prove competency as a prerequisite of employment.

(sound of crickets.)

Thought so.

Actually, more of you might raise your hands to me than to most of my colleagues, because why? Because I have close ties to academe, that's why. So there are more than a few of you who are required to have your Master's degree or, those of you who are post-docs, to have your Ph.D. to get that research position or grant that you're working on.

The rest of the world? No.

Education, in the real world, is a detriment to doing your job. Across the board. A. Cross. The. Board.

Do you know how many Ph.D.s we fire? Do you know how many Ph.D.s and matriculated students we turn away, because of their education and their lack of real-world experience?

We did a survey: a sure bellwether for a prospective employee? The amount of their education: the more they have, the more likely they are to be useless on the job. It's the Ph.D.-disease: 'Ph.D.: "piled high and deep."' People get ed-ju-ma-kated and then they think because they have a sheepskin or that they have a certain GPA at a certain school and know a certain ... educated way of doing things, that they know how to do this-and-that other thing, totally unrelated, on the job.

"I've studied Martin-Löf's intuitionistic type theory."
"Great, how do you connect to our database?"
"Uh ...."
"Next."

You'll bridle at this, but you know who agrees most strongly with me? Ph.D.s.

I went to ICFP 2006 specifically looking to roll dependent types into the programming language I was using, in industry, and I could not get the time of day from a single implementer of the type theory in Twelf, and you know why?

"Why would you be interested in this? This is purely theoretical."

Uh, huh. As in "not applicable to industry." I reread the paper again, later: "Dependent types make programs easier to understand and more reliable." And why would I want that in my programs in industry, where it mattered?

I spent four years at the United States Coast Guard Academy: 10,000 students apply, only 300 are allowed in, only 150 graduate, each year. It cost, at the time, $250,000 to graduate a student from the Academy, making it the most expensive school in the nation. Thank you, taxpayers, for my education. I graduated with a dual major: mathematics and computer science.

How much of my education did I use on the job to save 150 lives from a capsized Arctic research exploration vessel (boy, they surely used their education, didn't they! ... to capsize their ship), act as the translator when we boarded Japanese trawlers, provide civil rights education and mediate EEO complaints and ... None. Zip. Zilch.

After my stint, how much of my college education did I use on the job? My job was encoding matrices in FORTRAN. How much FORTRAN did I study in college? Zip. Zilch. Nada. How much of the Advanced Calculus II did I use on the job? People, it was frikken matrix manipulation! Something you can (now) look up on wikipedia and pick up in, oh, two hours if you're really slow.

Java. C#. Swift. Visual whatever. Spring. SQL. HBase. Angular. JavaScript. All these things (Python, Ruby on Rails) can be taught in college, but they can be taught in high school, and I learned them in school, did I? No, I did not. I learned them on my own, thank you very much.

Design patterns, frameworks, data structures. Do educated people know these things? Some of them do. Most people with a 'computer science' degree DO NOT, people. They do not. They woefully do not, as comp sci teachers lament over and over again, and as I, the hiring manager, scratch my head wondering what, precisely, did these kids learn in school, because, insofar as I see, they did not learn abstraction, polymorphism, typing, or data structures.

They learned the if-statement. They learned the if-statement that n-dispatches within a for-loop from 1 to n off an array. That's the data structure they know: the array. Maps? Sets? The array. Polymorphism? We got you covered: the if-statement. Functional decomposition? Well, there's always main(String[] args), with a big-ole if-statement.

The word on the street is education is a canard at best and a detriment at most, and at worst, it's a project-sinker. That's a shame, because there are educated people who are productive and smart and effective in their field, and can help.

How to Solve It: A Modern Approach claims that one billion dollars is wasted on software because it's written in the absence of very simple techniques, such as linear programming. One. Billion. Dollars. Our authors, Michalewicz and Fogel, are off. Way off. I know. By a factor of at least one hundred.

We wasted a billion dollars on a file-indexing system for the FBI. Oh, it had email, too. Government project. Never used, because it was never delivered in a usable state. Do you know how many projects go through that cycle? I don't. I do know I've seen project after project just ... God. The waste. And I have friends. And they tell me stories.

But, you get MITRE in there, or you get a Ph.D. or two in there, and what happens? They study the issue. They study it for six-plus months, and then they write you a nice little white paper that states the obvious search criteria that you knew from day one, but what do you have to say? Where is your ROC analysis? So your bayesian system that was cranking out results month after month was killed by the bean-counting pointy-heads, and do they submit a ... working solution that could go into production? Oh, no. They submit a white paper calling for a research grant to allow for a year of surveying and further study of the issue. God.

Then they get fired, or they move on to more interesting research areas, leaving us behind to clean up the mess and get a working system out the door in some serviceable shape that used zero percent of their research. Zero percent. You see, they modeled the situation, but the model doesn't fit the data, which is raw and dirty, so their solution solved the model, not your problem, not even close.

Your degree. How much have you used your degree on your job? If you're a researcher, you probably use quite a bit of what you've studied in your research, and you are contributing more to your field of study. If you're not, then you're told this, day one, on your job: "Friend, put those books away, you're never going to use them again." I mean, seriously: did you really bring your college books to your job thinking you'd use them? NEEEERRRRRRD!

This, here, is the real world. The ivory tower is for the academics. In the real world, you roll up your sleeves, get to work, and get some results; because if you don't, the door is right over there. You were expecting to use your degree on your job? This is America, people. We don' need no edjumakashun. Now, if this were Soviet Russia, your degree uses you.

-----

So, silliness, and serious silliness, aside. Seriously. You were expecting to use your degree on your job?

English major. This is America, we don't talk English, we talk American, so, that's nice that you have that degree.

Mathematics major. This is America, we don't do 'maths,' nor trig, nor geometric algebras, nor category theory, how would you use any of that on your job?

I was seriously asked that in my interview for a job overseeing a 1.7-petabyte database. I said: 'uh, map-reduce is straight from category theory.' "Yes, but how do you use that on your job?" We both blinked at each other dumbly. The gulf.

You don't go to school to get trained to do a job well, ladies and gentlemen. I mean, too many of you do that, and too many others go to school to party some, to sex some, to blaze some, and then get to work after your folks financed your four-plus-year bacchanal.

College is not a technical training institute and has nothing to do with acquiring skills or proficiency in repetitive stress disorder, oh, I meant: 'your job.' Your job, almost without exception, can be just as proficiently performed by nearly anyone they drag off the street and put in your chair for eight hours a day. They sit in your chair for a few days, and everyone else won't even know you're gone. Most jobs.

My wife beat the pants off her entire payroll division with an excel spreadsheet because they didn't have simple accounting principles and deductive reasoning. Why? Because they were well-regulated at their jobs, proficient at it, in fact, and their job was to make continuous clerical errors because they had absolutely no rigor. Why would they? They weren't paid for rigor. They were paid for doing their jobs, which was: don't make waves.

I regularly go into situations where other software engineers (a misnomer, they are more like computer programmers, not engineers) say such-and-so cannot be done in programming language X. Then, I implement a little bit of category theory, in programming language X, do some simple mappings and natural transformations, and, voilà! those 50,000 lines of code that didn't solve the problem but only made things worse? I replace all that with 500 lines of code that actually delivers the solution. Unit tested: all the edge cases. And meeting their requirements, because I've translated the requirements into a declarative DSL on top of their programming language X.

Of course they couldn't solve the insurmountable problem in programming language X, not because they were using programming language X (although it helped with the colossal fail, being object-disoriented and improvably/mutatively impure), but because they couldn't think outside the box that 'you can only do this and that' as a software engineer. They were caught in their own domain and can't even see that they had boxed themselves in.

Because they were educated that way. Comp Sci 101: this is how you write a program. This is the 'if'-statement. This is the for-loop. If that doesn't work, add more if-statements wrapped by more for-loops, and this statement is perfectly acceptable:

    x = x + 1

Go to town.

That's what their education gave them: they went to school to acquire a trade and a proficiency at the if-statement, and gave up their ability to see and to think.

And some, many, academics are the most bigoted, most blundering blinders-on fools out there, because they see it their way, and they see their way as the only way, which requires a six-month research grant and further study after that. With co-authorship on the American Mathematical Society journal article.

And the uneducated are the worst, most pigheaded fools out there, so sure that the educated have nothing to offer, that they have no dirt under their perfectly manicured fingernails attached to silky-smooth hands that have never seen an honest day's work nor, God forbid! a callus, so what do they know, these blowhards, so the uneducated ignore the advances of research into type theory, category theory, object theory (polymorphism does help at times), any theory, and just code and code and code until they have something that 'looks good.'

How to solve this? Start with you. Not with your education, that is: not with your education that tells you who you are. Start with how you can help, and then help.

Project one: I saw how fractal dimensionality would solve a spectrum analysis problem. Did I say the words 'fractal' or 'dimensions'? No. I was working with real-programmers. If I asked if I could try this, do you know what they would say? Pfft. Yeah, right. Get back to work, geophf!

But, instead, I implemented the algorithm. I sat with a user who had been working on those signals and knew what he needed, and iterated through the result for a week. Just a week. While I did my job-job full time. I did the fractal spectrum analysis on my own time. My 'thing' floored the software management team. They had seen straight-line approximations before. They thought I was doing actual signal analysis. I mean: with actual signals. They showed my 'thing' to the prospective customer. And got funded.

Another project: data transformation and storage. I built a system that encompassed six hundred data elements using a monadic framework to handle the semideterminism. That was an unsolvable problem in Java. I used Java. Java with my monadic framework, yes, but Java, to solve the problem.

Third project: calculating a 'sunset date' over a data vector of dimension five over a time continuum. Hm: continuum. Unsolvable problem. Three teams of software developers tackled it over six months. Nobody could get close to the solution. Continuum. I used a comonadic framework. Took me, along with a tester who was the SME on the problem, and a front-end developer to get the presentation layer just right, about a month, and we solved that baby and put it to bed. Unit tested. All edge cases.

Did I tell them I used a comonadic framework? Nah, they tripped over themselves when they saw the word 'tuple.' No joke, my functional programming language friends: they, 'software engineers,' were afraid of the word 'tuple.' So I explained as much as anyone wanted to know when anyone asked. I wrote design documents, showing unit test case results, and they left me alone. They knew I knew what I was doing, and I got them their results. That's what they needed.

They didn't need my degree. They didn't need to know I used predicate logic to optimize SQL queries that took four fucking hours to run down to a query that took forty-five seconds. They didn't need to know I refactored using type theory, that A + B are disjoint types and A * B are type instances and A ^ B are function applications, so I could look at a program, construct a mathematical model of it, and get rid of 90% of it, because it was all redundantly-duplicated code inside if-clauses, so I simply extracted (2A + 2B ... ad nauseam) to 2(A + B ...) and then used a continuation, for God's sake, with 'in the middle of the procedure' code, or, heaven help me, parameterization over a simple functional decomposition exercise to reduce a nightmare of copy-and-paste to something that had a story to tell that made sense.

How do you connect to a database? Do you need a college degree for that? Kids with college degrees don't know the answer to that simple interview question. And they don't know the Spring framework, making 'how to connect to a database' a stupid-superfluous question. They don't know what unit tests give them. They don't know what unit tests don't give them. Because they, college kids and crusty old 'software engineers,' don't write them, so they have no consistency nor security in their code: they can't change anything here because it might break something over there, and they have no unit tests as a safety net to provide that feedback to them, and since they are programming in language X, a nice, strict, object-oriented programming language, they have no programs-as-proofs to know that what they are writing is at all good or right or anything.

A college degree gives you not that. A not-college degree gives you not that.

A college degree is supposed to do what, then? It's supposed to open your mind to the possibility of a larger world, and it's supposed to give you the tools to think, and to inquire, so that you can discern. "This, not that. That, and then this. This causes that. That is a consequence of this. I choose this for these reasons. These reasons are sound because of these premises. This reason here. Hm. I wonder about that one. It seems unsound. No: unfamiliar. Is it sound or unsound? Let me find out and know why."

English. Mathematics. Art. Literature. Music. Philosophy. All of these things are the humanities. The sciences and the above. Law. Physics. All these lead one to the tools of inquiry. In school, you are supposed to have been given tools to reason. Then, you're sent back out into the world. And then you are supposed to reason. And with your reason, you make the world a better place, or a worse place.

These things at school, these are the humanities, and they are there to make you human. Not good at your job, not so you can 'use' your degree as a skill at work, but to make you human. And, as human, are you good at your job? Yes. And, as human, do you make your world a place such that others are good and happy at their jobs? Yes.

The end of being human is not to be skilled, nor proficient ... 'good' at your job. But it's an accident of it, a happy accident. The 'end' of being human? Well: that's your inquiry. That's what school, that's what everything, is for: for you to answer the unanswered question. Your way.

And, if you accept that, and are fully realized as a human being, then your way is the best way in the world, and your way has the ability to change lives. First, your own, then others. Perhaps your coworkers. Perhaps hundreds of others. Perhaps thousands. Perhaps you will change the entire world. But you won't know that until you take that first step of inquiry. Then the next. Then the next. And you look back, and you see how far you've come, and ... wow. Just wow.

That's what school is for. Not for your job. For you.
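The "algebra of types" refactoring mentioned above can be sketched concretely in Haskell. This is my own illustrative sketch, not code from any of the projects described: sums, products, and exponentials of types correspond to Either, pairs, and functions, which is what lets duplicated branches be factored out like polynomial terms.

```haskell
-- A + B: a value is an A or a B (disjoint union)
type Sum a b = Either a b

-- A * B: a value is an A and a B
type Product a b = (a, b)

-- A ^ B: a function from B to A
type Exp a b = b -> a

-- Factoring 2A + 2B into 2(A + B), i.e. pulling the common Bool out of
-- both branches, exactly like factoring a polynomial:
factor :: Either (Bool, a) (Bool, b) -> (Bool, Either a b)
factor (Left  (c, a)) = (c, Left a)
factor (Right (c, b)) = (c, Right b)
```

The same move on code: two if-branches that each duplicate the same surrounding logic become one copy of that logic wrapped around a single sum-typed value.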

Reading files from the proc filesystem

Planet Haskell - Tue, 07/29/2014 - 06:00
I was stumped by this one myself for a bit today, so I thought writing it up in a blog post would be a good way to make sure (1) I don't forget this little fact, and (2) hopefully the next person doesn't need to puzzle over this as long as I did. Let's say you want to read the contents of a file in the proc filesystem, such as /proc/uptime. There are many ways to do that in Haskell. Let's ignore any streaming data framework for the moment, and instead focus on just the "string-like" types: String and strict/lazy ByteString/Text. Here's a little program that tries all of them out:

    import qualified Data.ByteString as S
    import qualified Data.ByteString.Lazy as L
    import qualified Data.Text.IO as T
    import qualified Data.Text.Lazy.IO as TL

    test :: Show a => String -> (FilePath -> IO a) -> IO ()
    test name reader = do
        contents <- reader "/proc/uptime"
        putStrLn $ name ++ ": " ++ show contents

    main :: IO ()
    main = do
        test "String           " readFile
        test "strict ByteString" S.readFile
        test "lazy ByteString  " L.readFile
        test "strict Text      " T.readFile
        test "lazy Text        " TL.readFile

Given that the uptime file is just simple ASCII data, you'd probably assume (like I did) that all of these will produce the same result. In fact, that's not the case. On my system, the results are:

    String           : "60740.70 136144.86\n"
    strict ByteString: ""
    lazy ByteString  : "60740.70 136144.86\n"
    strict Text      : "60740.70 136144.86\n"
    lazy Text        : "60740.70 136144.86\n"

Strict ByteString reading is returning an empty value! Why is this happening? It's actually quite easy to see once you throw in two new pieces of information. First, let's look at the implementation of Data.ByteString.readFile:

    readFile :: FilePath -> IO ByteString
    readFile f = bracket (openBinaryFile f ReadMode) hClose
        (\h -> hFileSize h >>= hGet h . fromIntegral)

Notice how we allocate a buffer exactly the right size to read in the entire contents of the file. We don't do this with any of the other file reading functions.
For the lazy variants, we don't want to read the entire file into memory at once. And for strict Text, knowing the size of the file doesn't tell us the size of the buffer we need to allocate, due to variable-length encoding. So this nifty optimization only applies to strict ByteStrings.

Now piece of data number two:

    $ ls -l /proc/uptime
    -r--r--r-- 1 root root 0 Jul 27 13:56 /proc/uptime

Huh, the file is empty! As is well documented, virtually every file in the proc filesystem is listed as empty, and the contents are generated on demand by the kernel.

So how do you read the file contents into a strict ByteString? There are actually plenty of approaches that work. In my case, I ended up just writing a helper function using conduit:

    localReadFile fp = IO.withBinaryFile fp IO.ReadMode $ \h ->
        sourceHandle h $$ foldC

But probably the simplest thing to do is to just convert a lazy ByteString into a strict ByteString, e.g. fmap L.toStrict . L.readFile.
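To make that last suggestion concrete, here is a minimal self-contained sketch of the lazy-read workaround. The helper name readProcFile is mine, not from any library; L.toStrict and L.readFile are the real bytestring functions:

```haskell
import qualified Data.ByteString as S
import qualified Data.ByteString.Lazy as L

-- Read a /proc-style file strictly by reading lazily (chunked, so no
-- upfront size is needed) and then flattening the chunks.
readProcFile :: FilePath -> IO S.ByteString
readProcFile = fmap L.toStrict . L.readFile

main :: IO ()
main = readProcFile "/proc/uptime" >>= S.putStr
```

On a Linux system this prints the actual uptime contents, even though the file's reported size is zero.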



Converting Make to Shake

Planet Haskell - Tue, 07/29/2014 - 06:00
Summary: I have converted over 10,000 lines from Make to Shake. Here are some tips I learnt along the way.

Make is the de facto build system for large projects - if no one made an active choice, your project is probably using Make. The Shake build system can be a better alternative, but how should you convert? The following tips are based on my experience converting a 10,000 line Make system to Shake.

Shake can do whatever Make can

Shake is more powerful than Make, so if Make could do something, Shake probably can too. As a first approximation, the Make snippet:

    output: input1 input2
        shell command to run

Becomes:

    "output" *> \out -> do
        need ["input1","input2"]
        cmd Shell "shell command to run"

In addition:

- Variables in Make usually map to normal Haskell variables.
- Definitions of rules and dependencies use the functions from Development.Shake. For example, .PHONY maps to the phony function.
- Filepath manipulation uses the functions from Development.Shake.FilePath.
- Dynamically generated include files can be handled with needMakefileDependencies from Development.Shake.Util.

Preserve the file/directory structure

The existing Make system will generate object files with particular names in particular places. Often these locations aren't what you would pick if you wrote the build system afresh. However, resist the temptation to "clean up" these pieces during the conversion. Treat the file locations as a specification, which lets you focus on the conversion to Shake without simultaneously redesigning a large and complex build system.

Treat the Makefile as a black box

Often the existing Makefile will be hard to understand, and sometimes won't be worth reading at all. The most important information in the Makefile is what commands it runs, which can be determined by make clean && make -j1 > log.txt, which captures a complete list of the commands run. From the commands it is usually relatively easy to determine the inputs and outputs, from which you can write the Shake rules.
However, the Makefile can be useful to figure out which commands to group into a single rule, and how to generalise rules to cover multiple files.

Split the metadata from the logic

Often the Makefiles combine metadata (these object files go into this executable) with logic (use gcc -O2 to build all executables). Shake is great for writing build logic, but metadata is often better placed in separate files (the Haskell syntax can be a little heavy). You can use the full power of Haskell to store whatever metadata you require, and addOracle from Shake can introduce granular dependencies on the information. The module Development.Shake.Config provides some helper functions that might serve as a suitable base.

To bootstrap the Shake system, often the metadata can be extracted from the existing Makefiles. You can write a temporary script to parse the Makefile and extract whatever you consider the metadata, clean it up, and write it to new configuration files. Initially the config files are generated, but once you delete the Make original, they become source files.

Focus on a single platform/configuration

Often a build system will be cross-platform (Linux/Mac/Windows), build multiple targets (binaries/distribution package/documentation) and build multiple configurations (release/debug/profile). To start the conversion, focus only on the most heavily developed platform/configuration - if the migration is successful, abstracting over the differences is far easier in Shake than Make. You may wish to start with a simple target to try out Shake (e.g. documentation), but after that work on the target developers use every day, so that the developers can make use of the improvements sooner, motivating the migration.

Convert bottom up

Shake demands that it has built all the dependencies (it checks the modification time is equal to what it remembered), in contrast Make only requires that targets are newer than their dependencies.
As a result, you should start converting the leaves of the build system to Shake, and work upwards. Provided you use the same file/directory structure, you can then build what you have defined with Shake, then finish the build with Make, checking the result still works as expected.

Run Make and Shake in parallel

Once you have migrated enough of the build system to be useful (the usual targets in the most common configuration), you should encourage some developers to try Shake instead of Make. These developers will find things that don't work properly, hidden features in the Make system that no one knew about, etc. Expect to fix problems and iterate several times.

Hopefully the Shake system will be faster and more robust. Once these advantages have encouraged all the main developers to make the switch, you should delete/disable the Make system and expect it to bitrot quickly.

Refactor individual rules

As you are converting rules from Make to Shake you can translate them directly and refactor later, or convert straight into more idiomatic Shake. As an example, you might start with:

    cmd Shell "ls >" out

The argument Shell tells Shake to use the system shell, meaning that > redirection works. Later on you may wish to switch to:

    Stdout result <- cmd "ls"
    writeFile' out result

Now you are invoking the ls command directly, capturing the output using Shake. Sometime later you may switch to:

    getDirectoryFiles "." ["*"]

Which is the Shake-tracked way of getting a list of files. Similarly, calling sed or for through Shell should probably be gradually converted to Shake/Haskell operations.

Refactor the whole

Once you have converted the whole build system, and disabled the original Make system, you may wish to refactor the build system - putting files in more appropriate places, rethinking file dependencies etc. In truth, I've never got round to this step, and I would be surprised if many people did.
However, as the build system grows, hopefully the new bits with sensible decisions will gradually begin to outnumber the old bits with questionable design.

Ask if you get stuck

Build systems (even in Shake) are complex entities, with intricate coordination between files, which mostly run untyped external commands with many platform/version differences. As a result, build systems are often complex to write. If you have a problem using Shake, just ask. If you can boil down the problem to something fairly standalone, ask on StackOverflow with the tag shake-build-system. If you are looking for more general advice, ask on the mailing list. If you succeed, write a blog post and tweet me. 2014-07-26T20:49:00Z Neil Mitchell
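Putting the translation tips above together, a minimal complete Shake build file might look like the following sketch. The file names and the cat command are illustrative assumptions, not from the post; the Shake functions (shakeArgs, want, phony, need, cmd, removeFilesAfter) and the *> rule operator are the real API of the era:

```haskell
import Development.Shake

main :: IO ()
main = shakeArgs shakeOptions $ do
    -- "all" in Make becomes want: the default targets to build
    want ["output"]

    -- .PHONY: clean  becomes the phony function
    phony "clean" $ removeFilesAfter "." ["output"]

    -- output: input1 input2  plus its shell command, as one typed rule
    "output" *> \out -> do
        need ["input1", "input2"]
        cmd Shell "cat input1 input2 >" out
```

Running the resulting binary behaves like make: it rebuilds output only when input1 or input2 change, and ./build clean removes the generated files.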

New theme!

Planet Haskell - Tue, 07/29/2014 - 06:00
Hello loyal readers: Inside 206-105 has a new theme! I’m retiring Manifest, which was a pretty nice theme, but (1) the text size was too small and (2) I decided I didn’t really like the fonts. I’ve reskinned my blog with a theme based on Brent Jackson’s Ashley, but ported to work on WordPress. I […]

Write web services around databases with 0 boilerplate: announcing servant 0.1

Planet Haskell - Tue, 07/29/2014 - 06:00
At Zalora, we write a lot of web services and web applications in general. We use scotty a lot. And after having written a couple of web services, despite some small handy abstractions we came up with, it really felt like we could achieve the same thing in a very concise and minimalist manner, by letting the compiler do more work for us, so that we would just have to write wrappers for our SQL queries in Haskell. All we had to do was to take advantage of a couple of extensions that landed in GHC in the past few years and propagate the right bits of information at the type level. And this is what we’ve done.

The result is servant (github, hackage), which lets you declare resources, which just represent a bunch of operations (think endpoints) that operate on some type. So you could for example declare a users resource that supports adding, deleting and listing users in the following way:

    mkResource "users" ctx exceptions
      & addWith     addUser
      & deleteWith  deleteUser
      & listAllWith listUsers

where:

- ctx is just there to specify how to get our hands on our database connection, for example; think of it as a withConnection function
- exceptions is a bunch of functions that catch exceptions of various types and turn them into an error type of yours
- addUser, deleteUser and listUsers are functions that run the corresponding SQL queries using the connection provided by ctx

And now you can turn this into a JSON-based webservice by simply applying Servant.Scotty.runResource to this simple definition. Then, provided you have written a handful of instances as required by each operation, you’ll have a small REST-y webservice with 3 endpoints that do what you expect.

The more interesting aspect of servant however is that the add, delete and listall operations just happen to be some prelude operations provided by the servant packages. You can define your own in just the same way the standard ones are defined.
The same applies to the automatic JSON-based request body/response body handling or to the web-framework backend used (we only have a scotty one for now, but you could write your own for any other framework by drawing some inspiration from the scotty one). You can extend servant in basically every possible direction.

If you want to learn more about servant, how it can be used and how it works, you may be interested to check out the README from github, which contains some documentation links, which I’ll reproduce here:

- Getting started with servant, which guides you through building the simple webservice we’ve seen above. There’s an example in the repository with the code covered in this getting started guide, with a cabal file and everything.
- Tutorial, which dives much more into servant’s packages and modules and its inner workings, with some illustrations of the extensibility of servant.
- Haddocks for all servant packages

We would of course be glad to hear any kind of feedback, so please do not hesitate to shoot us an email with comments, and report any issue you may encounter on our github. Posted on July 26, 2014 2014-07-26T00:00:00Z

Senior Haskell Developer at Plow Technologies (Full-time)

Planet Haskell - Tue, 07/29/2014 - 06:00
Plow Technologies is looking for an experienced Haskell developer who can lead software design. Deep understanding of the Haskell programming language is preferred. This person would be expected to work at all levels of our platform to optimize the performance, reliability, and maintainability of our code base. We want the kind of programmer who makes everyone else better by: (1) designing application programming interfaces and libraries that speed up development; and (2) teaching others through direct interaction. The kind of skills that are desired for this position are rare, so remote work would definitely be an option. However, some direct interaction (including travel) should be expected. Get information on how to apply for this position. 2014-07-24T20:07:52Z

Applicative vs Monadic build systems

Planet Haskell - Tue, 07/29/2014 - 06:00
Summary: Shake is a monadic build system, and monadic build systems are more powerful than applicative ones.

Several people have wondered if the dependencies in the Shake build system are monadic, and if Make dependencies are applicative. In this post I'll try and figure out what that means, and show that the claim is somewhat true. Gergo recently wrote a good primer on the concepts of Applicative, Monads and Arrows (it is worth reading the first half if you are unfamiliar with monad or applicative). Using a similar idea, we can model a simple build system as a set of rules:

rules :: [(FilePath, Action String)]
rules = [("a+b", do a <- need "a"; b <- need "b"; return (a ++ b))
        ,("a"  , return "Hello ")
        ,("b"  , return "World")
        ]

Each rule is on a separate line, containing a pair of the file the rule produces (e.g. a for the second rule) and the action that produces the file's contents (e.g. return "Hello "). I've used need to allow a rule to use the contents of another file, so the rule for a+b depends on the files a and b, then concatenates their contents. We can run these rules to produce all the files. We've written these rules assuming Action is a Monad, using the do notation for monads. However, for the above build system, we can restrict ourselves to Applicative functions:

rules = [("a+b", (++) <$> need "a" <*> need "b")
        ,("a"  , pure "Hello ")
        ,("b"  , pure "World")
        ]

If Action is applicative but not monadic then we can statically (without running any code operating on file contents) produce a dependency graph. If Action is monadic we can't generate a graph upfront, but there are some build systems that cannot be expressed applicatively. In particular, using a monad we can write a "dereferencing" build system:

rules = [("!a", do a <- need "a"; need a)
        ,("a" , pure "b")
        ,("b" , pure "Goodbye")
        ]

To build the file !a we first require the file a (which produces the contents b), then we require the file b (which produces the contents Goodbye).
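The static-analysis point can be made concrete with a small sketch. These are my types, not Shake's: an applicative-only Action records its dependency list up front, so we can read the dependencies off without running anything on file contents.

```haskell
import Data.Map (Map)
import qualified Data.Map as Map

-- A sketch of an applicative-only Action: a static list of needed files,
-- plus a pure function of their contents.
data ActionA a = ActionA [FilePath] ([String] -> a)

needA :: FilePath -> ActionA String
needA f = ActionA [f] head

instance Functor ActionA where
  fmap g (ActionA ds k) = ActionA ds (g . k)

instance Applicative ActionA where
  pure x = ActionA [] (const x)
  ActionA ds1 k1 <*> ActionA ds2 k2 =
    ActionA (ds1 ++ ds2)
            (\xs -> let (x1, x2) = splitAt (length ds1) xs
                    in k1 x1 (k2 x2))

-- Dependencies are visible statically, before any file is read...
depsA :: ActionA a -> [FilePath]
depsA (ActionA ds _) = ds

-- ...and running the action just feeds it the contents it asked for.
runA :: Map FilePath String -> ActionA a -> a
runA files (ActionA ds k) = k (map (files Map.!) ds)

main :: IO ()
main = do
  let rule = (++) <$> needA "a" <*> needA "b"
  print (depsA rule)
  print (runA (Map.fromList [("a", "Hello "), ("b", "World")]) rule)
```

The monadic need a example has no analogue here: a bind would have to inspect a file's contents before the next dependency is known, which is exactly what this representation rules out.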
Note that the first rule has changed b the content into b the file name. In general, moving information from a file's contents to a file name requires a monad. Alternatively stated, a monad lets you choose future dependencies based on the results of previous dependencies. One realistic example (from the original Shake paper) is building a .tar file from the list of files contained in a file. Using Shake we can write the Action:

contents <- readFileLines "list.txt"
need contents
cmd "tar -cf" [out] contents

The only build systems that I'm aware of that are monadic are redo, SCons and Shake-inspired build systems (including Shake itself, Jenga in OCaml, and several Haskell alternatives). While it is the case that Shake is monadic, and that monadic build systems are more powerful than applicative ones, it is not the case that Make is applicative. In fact, almost no build systems are purely applicative. Looking at the build shootout, every build system tested can implement the !a example (provided the file a is not a build product), despite several systems being based on applicative dependencies. Looking at Make specifically, it's clear that the output: input1 input2 formulation of dependencies is applicative in nature. However, there are at least two aspects I'm aware of that increase the power of Make:

- Using $(shell cat list.txt) I can splice the contents of list.txt into the Makefile, reading the contents of list.txt before the dependencies are parsed.
- Using -include file.d I can include additional rules that are themselves produced by the build system.

It seems every "applicative" build system contains some mechanism for extending its power. I believe some are strictly less powerful than monadic systems, while others may turn out to be an encoding of monadic rules. However, I think that an explicitly monadic definition provides a clearer foundation. 2014-07-23T19:11:00Z Neil Mitchell

When do n and 2n have the same digits?

Planet Haskell - Tue, 07/29/2014 - 06:00
[This article was published last month on the math.stackexchange blog, which seems to have died young, despite many earnest-sounding promises beforehand from people who claimed they would contribute material. I am repatriating it here.] A recent question on math.stackexchange asks for the smallest positive integer that has the same decimal digits, in some other order, when doubled. Math geeks may immediately realize that 142857 has this property, because it is the first 6 digits of the decimal expansion of 1/7, and the cyclic behavior of the decimal expansion of 1/7 is well-known. But is this the minimal solution? It is not. Brute-force enumeration of the solutions quickly reveals that there are 12 solutions of 6 digits each, all permutations of 125874, and that larger solutions, such as 1025874 and 1257489, seem to follow a similar pattern. What is happening here? Stuck in Dallas-Fort Worth airport one weekend, I did some work on the problem, and although I wasn't able to solve it completely, I made significant progress. I found a method that allows one to hand-calculate that there is no solution with fewer than six digits, and to enumerate all the solutions with 6 digits, including the minimal one. I found an explanation for the surprising behavior that solutions tend to be permutations of one another. The short form of the explanation is that there are fairly strict conditions on which sets of digits can appear in a solution of the problem. But once the set of digits is chosen, the conditions on the order of the digits in the solution are fairly lax. So one typically sees, not only in base 10 but in other bases, that the solutions to this problem fall into a few classes that are all permutations of one another; this is exactly what happens in base 10, where all the 6-digit solutions are permutations of 125874. As the number of digits is allowed to increase, the strict first set of conditions relaxes a little, and other digit groups appear as solutions.
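The brute-force enumeration is easy to reproduce; a few lines of Haskell (my sketch, not part of the original article) confirm both the minimal solution and the count of 6-digit solutions.

```haskell
import Data.List (sort)

-- n has the property when 2n is a rearrangement of n's decimal digits.
sameDigits :: Integer -> Bool
sameDigits n = sort (show n) == sort (show (2 * n))

-- All solutions with at most six digits.
solutions :: [Integer]
solutions = filter sameDigits [1 .. 999999]

main :: IO ()
main = do
  print (head solutions)    -- 125874, the minimal solution
  print (length solutions)  -- 12, all of them permutations of 125874
```

Extending the upper bound to 9999999 turns up the 7-digit solutions such as 1025874 mentioned above, at the cost of a few seconds of runtime.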
Notation

The property of interest, P_R(N), is that the numbers N and 2N have exactly the same base-R digits. We would like to find numbers having property P_R for various R, and we are most interested in R = 10. Suppose N is an n-digit numeral having property P_R; let the (base-R) digits of N be a_{n-1}a_{n-2}…a_0 and similarly the digits of 2N are b_{n-1}b_{n-2}…b_0. The reader is encouraged to keep in mind the simple example of R = 8 and N = 1042, which we will bring up from time to time. Since the digits of N and 2N are the same, in a different order, we may say that b_i = a_{P(i)} for some permutation P. In general P might have more than one cycle, but we will suppose that P is a single cycle. All the following discussion of P will apply to the individual cycles of P in the case that P is a product of two or more cycles. For our example of 1042, we have P = (0 1 2 3) in cycle notation. We won't need to worry about the details of P, except to note that i, P(i), P^2(i), …, P^{n-1}(i) completely exhaust the indices 0 … n-1, and that P^n(i) = i because P is an n-cycle.

Conditions on the set of digits in a solution

For each i we have $$a_{P(i)} = b_{i} \equiv 2a_{i} + c_i\pmod R $$ where the ‘carry bit’ c_i is either 0 or 1 and depends on whether there was a carry when doubling a_{i-1}. (When i = 0 we are in the rightmost position and there is never a carry, so c_0 = 0.) We can then write: $$\begin{align} a_{P(P(i))} &= 2a_{P(i)} + c_{P(i)} \\ &= 2(2a_{i} + c_i) + c_{P(i)} &&= 4a_i + 2c_i + c_{P(i)}\\ a_{P(P(P(i)))} &= 2(4a_i + 2c_i + c_{P(i)}) + c_{P(P(i))} &&= 8a_i + 4c_i + 2c_{P(i)} + c_{P(P(i))}\\ &&&\vdots\\ a_{P^n(i)} &&&= 2^na_i + v \end{align} $$ all equations taken mod R. But since P is an n-cycle, P^n(i) = i, so we have $$a_i \equiv 2^na_i + v\pmod R$$ or equivalently $$\big(2^n-1\big)a_i + v \equiv 0\pmod R\tag{$\star$}$$ where v depends only on the values of the carry bits c_i, c_{P(i)}, …, c_{P^{n-1}(i)}; these bits, read from most significant to least, are precisely the binary digits of v. Specifying a particular value of a_i and v that satisfy this equation completely determines all the a_{P^j(i)}.
For example, (a_0, v) = (2, 2) is a solution when R = 8 and n = 4, because (2^4 - 1)·2 + 2 = 32 ≡ 0 (mod 8), and this solution allows us to compute $$\def\db#1{\color{darkblue}{#1}}\begin{align} a_0&&&=2\\ a_{P(0)} &= 2a_0 &+ \db0 &= 4\\ a_{P^2(0)} &= 2a_{P(0)} &+ \db0 &= 0 \\ a_{P^3(0)} &= 2a_{P^2(0)} &+ \db1 &= 1\\ \hline a_{P^4(0)} &= 2a_{P^3(0)} &+ \db0 &= 2\\ \end{align}$$ where the carry bits are visible in the third column, and all the sums are taken mod 8. Note that a_{P^4(0)} = a_0 as promised. This derivation of the entire set of digits from a single one plus a choice of v is crucial, so let's see one more example. Let's consider R = 10 and n = 3. Then we want to choose a_0 and v so that 7a_0 + v ≡ 0 (mod 10), where 0 ≤ v < 2^3. One possible solution is a_0 = 5, v = 5 = 101₂. Then we can derive the other digits as follows: $$\begin{align} a_0&&&=5\\ a_{P(0)} &= 2a_0 &+ \db1 &= 1\\ a_{P^2(0)} &= 2a_{P(0)} &+ \db0 &= 2 \\\hline a_{P^3(0)} &= 2a_{P^2(0)} &+ \db1 &= 5\\ \end{align}$$ And again we have a_{P^3(0)} = a_0 as required. Since the bits of v are used cyclically, not every pair of (a_0, v) will yield a different solution. Rotating the bits of v and pairing them with different choices of a_0 will yield the same cycle of digits starting from a different place. In the first example above, we had v = 2 = 0010₂. If we were to take v = 4 = 0100₂ (which also solves (⋆)) we would get the same cycle of values of the a_i but starting from 4 instead of from 2, and similarly if we take v = 8 = 1000₂ or v = 1 = 0001₂. So we can narrow down the solution set of (⋆) by considering only the so-called bracelets of v rather than all possible values. Two values of v are considered equivalent as bracelets if one is a rotation of the other. When a set of v-values are equivalent as bracelets, we need only consider one of them; the others will give the same cyclic sequence of digits, but starting in a different place. For n = 4, for example, the bracelets are 0000, 0001, 0011, 0101, 0111 and 1111; the sequences 0010, 0100 and 1000 being equivalent to 0001, and so on.

Example

Let us take R = 9 and n = 3, so we want to find 3-digit numerals with property P₉. According to (⋆) we need 7a_i + v ≡ 0 (mod 9) where 0 ≤ v < 2^3.
There are 9 possible values for a_i; for each one there is at most one possible value of v that makes the sum zero: $$\begin{array}{rrr} a_i & 7a_i & v \\ \hline 0 & 0 & 0 \\ 1 & 7 & 2 \\ 2 & 14 & 4 \\ 3 & 21 & 6 \\ 4 & 28 & \\ 5 & 35 & 1 \\ 6 & 42 & 3 \\ 7 & 49 & 5 \\ 8 & 56 & 7 \\ \end{array} $$ (For a_i = 4 there is no solution.) We may disregard the non-bracelet values of v, as these will give us solutions that are the same as those given by bracelet values of v. The bracelets are: $$\begin{array}{rl} 000 & 0 \\ 001 & 1 \\ 011 & 3 \\ 111 & 7 \end{array}$$ so we may disregard the solutions except those where v is 0, 1, 3 or 7. Calculating the digit sequences from these four values of v and the corresponding a_i we find: $$\begin{array}{ccl} a_0 & v & \text{digits} \\ \hline 0 & 0 & 000 \\ 5 & 1 & 512 \\ 6 & 3 & 637 \\ 8 & 7 & 888 \ \end{array} $$ (In the second line, for example, we have a_0 = 5, so a_{P(0)} = 2·5 ≡ 1 and a_{P^2(0)} = 2·1 = 2, all mod 9.) Any three-digit number, whose double contains exactly the same three digits in base 9, must therefore consist of exactly the digits 512 or 637 (the sets 000 and 888 lead only to the trivial solution and to a nonsolution, respectively).

A warning

All the foregoing assumes that the permutation P is a single cycle. In general, it may not be. Suppose we did an analysis like that above for n = 5 and found that there was no possible digit set, other than the trivial set 00000, that satisfied the governing equation (⋆). This would not completely rule out a base-10 solution with 5 digits, because the analysis only rules out a cyclic set of digits. There could still be a solution where P was a product of a 2-cycle and a 3-cycle, or a product of still smaller cycles. Something like this occurs, for example, in one case below: solving the governing equation yields only four possible digit cycles, but there are several additional solutions, corresponding to permutations with more than one cycle. In one of these, for example, P exchanges two of the digits and leaves the other two fixed. For this reason we cannot rule out the possibility of an n-digit solution without first considering all smaller n.
The Large Equals Odd rule

When R is even there is a simple condition we can use to rule out certain sets of digits from being single-cycle solutions. Recall that the digits of N are a_{n-1}…a_0 and the digits of 2N are b_{n-1}…b_0. Let us agree that a digit d is large if 2d ≥ R and small otherwise. That is, d is large if, upon doubling, it causes a carry into the next column to the left. Since b_i ≡ 2a_i + c_i (mod R), where the c_i are carry bits, we see that, except for b_0, the digit b_i is odd precisely when there is a carry from the next column to the right, which occurs precisely when a_{i-1} is large. Thus the number of odd digits among b_1…b_{n-1} is equal to the number of large digits among a_0…a_{n-2}. This leaves the digits b_0 and a_{n-1} uncounted. But b_0 is never odd, since there is never a carry in the rightmost position, and a_{n-1} is always small (since otherwise 2N would have n+1 digits, which is not allowed). So the number of large digits in N is exactly equal to the number of odd digits in 2N. And since N and 2N have exactly the same digits, the number of large digits in N is equal to the number of odd digits in N. Observe that this is the case for our running example 1042: there is one odd digit (the 1) and one large digit (the 4). When R is odd the analogous condition is somewhat more complicated, but since the main case of interest is R = 10, we have the useful rule that: For R even, the number of odd digits in any solution is equal to the number of large digits.

Conditions on the order of digits in a solution

We have determined, using the above method, that the digits 512 might form a base-9 numeral with property P₉. Now we would like to arrange them into a base-9 numeral that actually does have that property. Again let us write the digits of N as a_2a_1a_0 and those of 2N as b_2b_1b_0, with b_i = a_{P(i)}. Note that if a_i = 1, then b_i = 3 (if there was a carry from the next column to the right) or 2 (if there was no carry), but since 3 is not one of our digits, we must have b_i = 2 and therefore a_{i-1} must be small, since there is no carry into position i. But a_{i-1} is also one of 1, 2 or 5, and since it cannot be the large 5, it must be 1 or 2. This shows that the 1, unless it appears in the rightmost position, must be to the left of the 1 or the 2; it cannot be to the left of the 5.
Similarly, if a_i = 2 then b_i = 5, because b_i = 4 is impossible, so the 2 must be to the left of a large digit, which must be the 5. Similar reasoning produces no constraint on the position of the 5; it could be to the left of a small digit (in which case it doubles to 1) or a large digit (in which case it doubles to 2). We can summarize these findings as follows: $$\begin{array}{cl} \text{digit} & \text{to the left of} \\ \hline 1 & 1, 2, \text{end} \\ 2 & 5 \\ 5 & 1,2,5,\text{end} \end{array}$$ Here “end” means that the indicated digit could be the rightmost. Furthermore, the left digit of N must be small (or else there would be a carry in the leftmost place and 2N would have 4 digits instead of 3), so it must be either 1 or 2. It is not hard to see from this table that the digits must be in the order 125 or 251, and indeed, both of those numbers have the required property: 2 · 125₉ = 251₉, and 2 · 251₉ = 512₉. This was a simple example, but in more complicated cases it is helpful to draw the order constraints as a graph. Suppose we draw a graph with one vertex for each digit, and one additional vertex to represent the end of the numeral. The graph has an edge from one vertex to another whenever the first digit can appear to the left of the second. Then the graph drawn for the table above looks like this: A 3-digit numeral with property P₉ corresponds to a path in this graph that starts at one of the nonzero small digits (marked in blue), ends at the red node marked ‘end’, and visits each node exactly once. Such a path is called hamiltonian. Obviously, self-loops never occur in a hamiltonian path, so we will omit them from future diagrams. Now we will consider the digit set 637, again base 9. An analysis similar to the foregoing allows us to construct the following graph: Here it is immediately clear that the only hamiltonian path is 3-7-6, and indeed, 2 · 376₉ = 763₉. In general there might be multiple instances of a digit, and so multiple nodes labeled with that digit.
Analysis of the all-zeroes case produces a graph with no legal start nodes and so no solutions, unless leading zeroes are allowed, in which case the all-zeroes numeral is a perfectly valid solution. Analysis of the all-(R-1)s case produces a graph with no path to the end node and so no solutions. These two trivial patterns appear for all R and all n, and we will ignore them from now on. Returning to our ongoing example, 1042 in base 8, we see that the 1 and the 2 must double to 2 and 4, so each must be to the left of a small digit, but the 0 and the 4 can double to either 0 or 1 and so could be to the left of anything. Here the constraints are so lax that the graph doesn't help us narrow them down much: Observing that the only arrow into the 4 is from 0, so that the 4 must follow the 0, and that the entire number must begin with 1 or 2, we can enumerate the solutions: 1042 1204 2041 2104 If leading zeroes are allowed we have also: 0412 0421 All of these are solutions in base 8.

The case of R = 10

Now we turn to our main problem, solutions in base 10. To find all the solutions of length 6 requires an enumeration of smaller solutions, which, if they existed, might be combined into a solution of length 6. This is because our analysis of the digit sets that can appear in a solution assumes that the digits are permuted cyclically; that is, the permutations that we considered had only one cycle each. There are no smaller solutions, but to prove that the length 6 solutions are minimal, we must analyze the cases for smaller n and rule them out. We now produce a complete analysis of the base 10 case, with R = 10 and n ≤ 6. For n = 1 there is only the trivial solution of 0, which we disregard. (The question asked for a positive number anyway.) For n = 2, we want to find solutions of 3a_i + v ≡ 0 (mod 10) where v is a two-bit bracelet number, one of 00₂, 01₂ or 11₂. Tabulating the values of a_i and v that solve this equation we get: $$\begin{array}{ccc} v& a_i \\ \hline 0 & 0 \\ 1& 3 \\ 3& 9 \\ \end{array}$$ We can disregard the v = 0 and v = 3 solutions because the former yields the trivial solution 00 and the latter yields the nonsolution 99.
So the only possibility we need to investigate further is a_i = 3, v = 1, which corresponds to the digit sequence 36: doubling the 3 gives us 6, and doubling the 6, plus a carry, gives us 3 again. But tabulating which digits must be left of which informs us that there is no solution with just 3 and 6, because the graph we get, once self-loops are eliminated, looks like this: which obviously has no hamiltonian path. Thus there is no solution for n = 2. For n = 3 we need to solve the equation 7a_i + v ≡ 0 (mod 10) where v is a bracelet number in {0, …, 7}, specifically one of 0, 1, 3 or 7. Since 7 and 10 are relatively prime, for each v there is a single a_i that solves the equation. Tabulating the possible values of a_i as before, and this time omitting rows with no solution, we have: $$\begin{array}{rrl} v & a_i & \text{digits}\\ \hline 0& 0 & 000\\ 1& 7 & 748 \\ 3& 1 & 125\\ 7&9 & 999\\ \end{array}$$ The digit sequences 000 and 999 yield trivial solutions or nonsolutions as usual, and we will omit them in the future. The other two lines suggest the digit sets 748 and 125, both of which fail the “odd equals large” rule. This analysis rules out the possibility of a single-cycle digit set for n = 3, but it does not completely rule out a 3-digit solution, since one could be obtained by combining a one-digit and a two-digit solution, or three one-digit solutions. However, we know by now that no one- or two-digit solutions exist. Therefore there are no 3-digit solutions in base 10. For n = 4 the governing equation is 15a_i + v ≡ 0 (mod 10), where v is a 4-bit bracelet number, one of 0, 1, 3, 5, 7 or 15. This is a little more complicated because 15 and 10 are not relatively prime. Tabulating the possible digit sets, we get: $$\begin{array}{crrl} a_i & 15a_i& v & \text{digits}\\ \hline 0 & 0 & 0 & 0000\\ 1 & 5 & 5 & 1250\\ 1 & 5 & 15 & 1375\\ 2 & 0 & 0 & 2486\\ 3 & 5 & 5 & 3749\\ 3 & 5 & 15 & 3751\\ 4 & 0 & 0 & 4862\\ 5 & 5 & 5 & 5012\\ 5 & 5 & 5 & 5137\\ 6 & 0 & 0 & 6248\\ 7 & 5 & 5 & 7493\\ 7 & 5 & 5 & 7513\\ 8 & 0 & 0 & 8624 \\ 9 & 5 & 5 & 9874\\ 9 & 5 & 15 & 9999 \\ \end{array}$$ where the second column has been reduced mod 10.
Note that even restricting to bracelet numbers, the table still contains duplicate digit sequences; the 15 entries on the right contain only six basic sequences, each appearing in several rotations. Of these, only two obey the odd equals large criterion; we disregard the all-zeroes and all-nines sets as usual, leaving a single candidate. We construct the corresponding graph for this digit set by working out, for each digit, which digits it may stand to the left of: each digit's doubled value, with or without an incoming carry, must again be a digit of the set, and this dictates the permitted neighbours. The corresponding graph evidently has no hamiltonian path: whichever of 3 or 4 we start at, we cannot visit the other without passing through 7, and then we cannot reach the end node without passing through 7 a second time. So there is no solution for n = 4. We leave the n = 5 case as an exercise. There are 8 solutions to the governing equation, all of which are ruled out by the odd equals large rule. For n = 6 the possible solutions are given by the governing equation 63a_i + v ≡ 0 (mod 10), where v is a 6-bit bracelet number, one of 0, 1, 3, 5, 7, 9, 11, 13, 15, 21, 23, 27, 31 or 63. Tabulating the possible digit sets, we get: $$\begin{array}{rrl} v & a_i & \text{digits}\\ \hline 0 & 0 & 000000\\ 1 & 3 & 362486 \\ 3 & 9 & 986249 \\ 5 & 5 & 500012 \\ 7 & 1 & 124875 \\ 9 & 7 & 748748 \\ 11 & 3 & 362501 \\ 13 & 9 & 986374 \\ 15 & 5 & 500137 \\ 21 & 3 & 363636 \\ 23 & 9 & 989899 \\ 27 & 1 & 125125 \\ 31 & 3 & 363751 \\ 63 & 9 & 999999 \\ \end{array}$$ After ignoring 000000 and 999999 as usual, the large equals odd rule allows us to ignore all the other sequences except 124875 and 363636. The latter fails for the same reason that its four-digit analogue did when n = 4. But 124875, the lone survivor, gives us a complicated derived graph containing many hamiltonian paths, every one of which is a solution to the problem: It is not hard to pick out from this graph the minimal solution 125874, for which 2 · 125874 = 251748, and also our old friend 142857, for which 2 · 142857 = 285714. We see here the reason why all the small numbers with this property contain the digits 1, 2, 4, 5, 7 and 8.
The constraints on which digits can appear in a solution are quite strict, and rule out all other sequences of six digits and all shorter sequences. But once a set of digits passes these stringent conditions, the constraints on it are much looser, because is only required to have the digits of in some order, and there are many possible orders, many of which will satisfy the rather loose conditions involving the distribution of the carry bits. This graph is typical: it has a set of small nodes and a set of large nodes, and each node is connected to either all the small nodes or all the large nodes, so that the graph has many edges, and, as in this case, a largish clique of small nodes and a largish clique of large nodes, and as a result many hamiltonian paths. Onward This analysis is tedious but is simple enough to perform by hand in under an hour. As increases further, enumerating the solutions of the governing equation becomes very time-consuming. I wrote a simple computer program to perform the analysis for given and , and to emit the possible digit sets that satisfied the large equals odd criterion. I had wondered if every base-10 solution contained equal numbers of the digits and . This is the case for (where the only admissible digit set is ), for (where the only admissible sets are and ), and for (where the only admissible sets are and ). But when we reach the increasing number of bracelets has loosened up the requirements a little and there are 5 admissible digit sets. I picked two of the promising-seeming ones and quickly found by hand the solutions and , both of which wreck any theory that the digits must all appear the same number of times. Acknowledgments Thanks to Karl Kronenfeld for corrections and many helpful suggestions.

A few new papers

Planet Haskell - Tue, 07/29/2014 - 06:00
I’ve just updated my web page with links to some new papers that are now available: “Homotopical Patch Theory” by Carlo Angiuli, Ed Morehouse, Dan Licata, and Robert Harper. To appear, ICFP, Gothenburg, October 2014. We’re also preparing an expanded version with a new appendix containing material that didn’t make the cut for ICFP. (Why do […]

EclipseFP 2.6.1 released!

Planet Haskell - Tue, 07/29/2014 - 06:00
I've just released EclipseFP 2.6.1. EclipseFP is a set of Eclipse plugins for Haskell development. This is a bug-fixing release, mainly for GHC 7.8 support. Release notes can be found here. As usual, download from within Eclipse. Happy Haskell Hacking!

CTO / Tech Co-Founder at Capital Match (Full-time)

Planet Haskell - Tue, 07/29/2014 - 06:00
TL;DR: start and build the technology for a financial services marketplace in Asia. Compensation is salary plus double-digit percent equity. There will be a short trial period to make sure both sides want to work with each other. Relocation to Singapore is mandatory (the trial could be remote and part-time).

Company

Capital Match is bringing peer-to-peer lending (basically, a marketplace for retail/institutional lenders and corporate borrowers that bypasses the banking system) to Southeast Asia, where for various reasons the US incumbents have not entered. The founders are well connected and are bringing the right contacts and background to make it happen. The company started as a traditional financier for SMEs to better understand the market, as well as the legal and credit aspects of the business, before it would embark on the P2P model. If you would like to learn more about the business model, here is a link explaining it from the point of view of the current very successful US incumbents:

Job description and compensation

The CTO will first build the marketplace, then grow the team as it gains traction. We provide the legal, financial and admin functions as well as the market research backing a high-level functional spec; you just need to worry about building the product. The division of labour will be very clear: you are the final call on anything technical, and nobody will come micromanage your work. Compensation will be a lowish middle-class salary by Singapore standards plus double-digit percent equity, subject to a trial period. Note this is not a strictly technical business, and the marketplace problem is a relatively straightforward and well-known one, with the value in the contacts and understanding of the market that go into the functional spec. Technology could, though, bring a distinct value and advantage over time.

Additionally, we have eschewed raising much funding for now, and most of the capital comes from the founders' personal savings (which we think is a positive signal - our interests are aligned), so don't expect Silicon Valley perks for a while. We don't have hog roasts and whisky-tasting Fridays, but you get a real, founder-level stake in the company. Relocation to Singapore is essential for the CTO, although the rest of the team you'll build can be remote. During a trial period you can work remotely and part-time.

Tech stack

Thanks to one founder's very positive experiences with the Haskell experiment at Zalora, we are very keen to use functional programming languages, especially Haskell. We are however technology agnostic ("best stack for the problem"). We have a bias towards those who prefer the relational model over NoSQL, and towards open source.

Desired experience

The CV matters less than your ability to build things, so please send us any major open source project you have authored: both a link to the repo and a "basic" description targeted at the non-technical founders. We would prefer to see some financial services experience, especially on the security side, and some experience building similar products would be even better. We want to utilize ample local government funding for high-tech start-ups, so a scientific / high-tech background and a post-grad degree would be preferred. You can attempt to apply without an open source repo to your name; in that case, build us a demonstration of your skills that you think reflects your ability. Please send your application to pawel [at] capital-match [dot] com. Get information on how to apply for this position. 2014-07-22T06:27:49Z

Parallelism and Concurrency, Revisited

Planet Haskell - Tue, 07/29/2014 - 06:00
To my delight, I still get compliments on and criticisms of my post from three years ago (can it possibly be that long?) on parallelism and concurrency.  In that post I offered a “top down” argument to the effect that these are different abstractions with different goals: parallelism is about exploiting computational resources to maximize efficiency, concurrency is […]

Meditations on Using Haskell

Planet Haskell - Tue, 07/29/2014 - 06:00
Bitemyapp - Meditations on Using Haskell explains why and how those in the trenches use Haskell, by quoting from conversations on an IRC channel.

E: So when I found Haskell I slingshotted off through dependent and substructural types. Assuming that if a little was good a lot was better. Made it half way through TaPL and found pure type systems, coq, etc. I think the power-to-weight ratio isn't there. I find that Haskell gives amazingly expressive types that have amazing power for the amount of code you tie up in them, and that are very resistant to refactoring. If I write Agda and refactor, I scrap and rewrite everything. If I write Haskell, and get my tricky logic bits right? I can refactor it, split things up into classes, play all the squishy software engineering games to get a nice API I want. And in the end, if it still compiles, I can trust I didn't screw up the refactoring with a very high degree of assurance.

C: Admittedly I'm not playing at the level E is, but this was my experience. I can make sweeping changes to my API, get all the bugs caught by the type system, and still have minimal code impact.

B: That is what I was getting at with the tweet about not using dynamically typed langs because I need to be able to prototype quickly and get rapid feedback. I think a lot of my friends thought I was just being trollish. Even just being able to see what would have to change if you changed your design slightly and being able to back it out quickly…

That's totes my Bag!

Planet Haskell - Tue, 07/29/2014 - 06:00
So, does that mean I like tote-bags?

So, today's question on @1HaskellADay was this: write a function countOccurences :: [Stirng] -> Map Char Int (typos faithfully reproduced) such that

lookup 'l' $ countOccurences "Hello" ~> Just 2
lookup 'q' $ countOccurences "Hello" ~> Nothing

Okay, that can be done easily enough, I suppose, by torquing Map into something that it isn't, so one gets wrapped around the axle of creating a mapping from characters to occurrences. But why? First of all, countOccurences maps a String (not a List of Strings) to a Map, and that map is a very specialized kind of map that has existed in the literature for quite a while, and that map is known as the Bag data type, and is also, nowadays, called the MultiSet by people too embarrassed to say the word 'bag' in a sentence, because of their prior drug convictions. ("I got two months for selling a dime bag.") So they now twist the word 'Set' (a 'collection of unique objects') to mean something that's not a set at all, the 'Multi'Set, which is a 'collection of unique objects, but you can have multiples of these unique objects, so they're not unique at all, so it isn't a set at all, but we need to say the word 'set' because we can't say the word 'bag' because saying the word 'bag' would make us sound plebeian for some reason.' Yeah, that. 'MultiSet.' What. Ev. Er. But I digress. As always.

So I COULD write countOccurences as a String -> Map Char Int function, but then: why bother? You can either write tons of algorithmic code that obscures the intent or just simply use the appropriate data type. I went for the latter. Now, I wuz gonna do a dependently-typed pair to represent an occurrence... ... notice how countOccurences is so badly misspelled, by the way? SOMEbody didn't QA-check their problem for the day today, I'm thinking. ... but then I said: 'eh!' I mean: WHY is lookup 'q' $ countOccurences "Hello" ~> Nothing? WHY can't the count of 'q' for a Bag Char representation of "Hello" be 0?
0 is a valid answer, and it keeps everything nice and monoidal without having to lift everything unnecessarily into the monadic domain.

So, yeah. Let's do that, instead.

So, here we go, and in Idris, because that's how I'm rolling these days. The advantages of dependent types have been enumerated elsewhere, so we'll just take it as an assumption that they're better and move on, using them, instead of extolling them, in this post.

Wrong!

So, my first attempt at Bag crashed and burned, because I did this:

    data Bag : (x : Type) -> Type where
        add : Bag x -> x -> Bag x
        emptyBag : Bag x

and the compiler was fine with that. Hey, I can declare any type I'd like, so long as the types just stay as types. But as soon as I tried to define these things:

    emptyList : List x
    emptyList = []

    emptyBag = Bag emptyList

    add (Bag []) x = Bag [(x, 1)]
    add (Bag ((x, y) :: rest)) x = Bag ((x, y + 1) :: rest)
    add (Bag ((z, y) :: rest)) x = Bag ((z, y) :: (add rest x))

the compiler looked at me and asked: 'geophf, what in tarnation are you-ah tryin' to do?'

And about the only intelligent answer I could muster was: 'Ummmm... idk.'

I had gotten too clever for myself by half, trying to reshape a data type you learn in Comp.Sci. 101 as a purely functional type.

Back to Basics ... (but not BASIC)

So, let's just declare Bag to be what it is and KISS: 'keep it simple, stupid!' Yes, let's:

    data Bag x = Air | Stuffed (x, Nat) (Bag x)

Now, I so totally could've gone with the balanced binary-tree representation instead of the simple and standard linked list, but, you know: 'live and learn!'

With this declaration the emptyBag becomes so trivial as to be unnecessary, and then add is simplicity itself, too (but add is, either way, so that's not saying much):

    add : Eq x => Bag x -> x -> Bag x
    add Air x = Stuffed (x, 1) Air
    add (Stuffed (z, y) rest) x =
        case x == z of
            True  => Stuffed (x, y + 1) rest
            False => Stuffed (z, y) (add rest x)

Now, you see me relying on the case-statement, here.
Unhappily.

I'd like my dependent types to say, 'unify x with x (reflexive) for the isomorphic case, and don't unify x with z for the other case.' But we're not there yet, or my coding isn't on par with being there yet, so I forced total coverage, bifurcating the result-set into isomorphic and not with a hard case-statement.

Ick. I hate explicit case-statements! Where is really, really, really smart pattern-matching when I need it?

But with add, constructing a Bag becomes easy, and then counting elements of that bag is easy, too (again, with another case-statement, sigh!):

    count : Eq x => x -> Bag x -> Nat
    count _ Air = 0
    count x (Stuffed (z, y) rest) =
        case x == z of
            True  => y
            False => count x rest

countOccurences (with one-too-few 'r's in the function name) becomes easy, given the Bag data type:

    countOccurences : String -> Bag Char
    countOccurences str = co' (unpack str) where
        co' [] = Air
        co' (char :: rest) = add (co' rest) char

YAWN!

But look at this:

    depth : Bag x -> Nat
    depth Air = 0
    depth (Stuffed _ rest) = 1 + depth rest

    sample : ?bag
    sample = countOccurences "The quick, brown fox jumped over the lazy dog."

    bag = proof search

When we do a depth sample, we get the not-surprising answer of 29 : Nat.

Perhaps this could be made a tad bit more efficient? Just perhaps.

Well, then, let's do that!

    data Bag x = Air | Stuffed (x, Nat) (Bag x) (Bag x)

We make Bag a (sorted, if not strictly balanced) tree, with the add-function doing the work of (very simply) branching off new nodes:

    add : Ord x => Bag x -> x -> Bag x
    add Air x = Stuffed (x, 1) Air Air
    add (Stuffed (z, y) less more) x =
        case (compare x z) of
            LT => Stuffed (z, y) (add less x) more
            GT => Stuffed (z, y) less (add more x)
            EQ => Stuffed (z, y + 1) less more

Then all the other functions change ('morph') to work with a tree, not a list, and with Ord elements, not (simply) Eq ones.

And so, the redefined depth-function gives a very different result:

    depth sample ~> 9 : Nat

Not bad! Not bad!
The improved data-structure improves efficiency across the board from O(N) to O(log N).

Hm, perhaps I'll have count return a dependently-typed pair, just as the library function filter does on List types, but not tonight.

Good night, Moon!
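If you'd rather follow along in Haskell than Idris, the same zero-for-missing-keys behaviour argued for above falls out of Data.Map from the containers package. This is my sketch, not part of the original exercise (and I've spelled countOccurrences correctly, unlike the puzzle):

```haskell
import qualified Data.Map.Strict as Map

-- A Bag is just a Map from elements to their (positive) multiplicities.
type Bag a = Map.Map a Int

-- Build a Bag from a list of elements, bumping the count on collision.
countOccurrences :: Ord a => [a] -> Bag a
countOccurrences = foldr (\x -> Map.insertWith (+) x 1) Map.empty

-- Missing elements count as 0, keeping the result monoidal
-- instead of lifting it into Maybe.
count :: Ord a => a -> Bag a -> Int
count = Map.findWithDefault 0
```

So count 'l' (countOccurrences "Hello") gives 2, and count 'q' (countOccurrences "Hello") gives 0 rather than Nothing.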

BuildWrapper/EclipseFP and GHC 7.8

Planet Haskell - Tue, 07/29/2014 - 06:00
I've been working on some issues related to GHC 7.8 in BuildWrapper and EclipseFP.

On the EclipseFP side, mainly the quickfixes are affected, because EclipseFP parses the GHC error messages to offer them, and the quote characters have changed in the GHC 7.8 messages.

On the BuildWrapper side, things are more complex. Adapting to API changes wasn't a big deal, but it seems that GHC bugs involving the GHC API, static linking and other unknowns cause some things to break. The solution I've found was to build BuildWrapper with the -dynamic flag. But I couldn't upload this to Hackage, because Cabal thinks that -dynamic is a debug flag (it starts with d). I've sent a bug fix to Cabal, so that'll be fixed in the next release. So if you're using GHC 7.8 and BuildWrapper, you may want to rebuild the executable with -dynamic (uncomment the relevant line in the cabal file).

Note: BuildWrapper comes with a comprehensive test suite (90 tests covering all aspects), so you can always build the tests and run them to ensure everything is OK on your system.

Happy Haskell Hacking!
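For reference, the change amounts to a ghc-options line in the executable stanza of the cabal file. This is only a sketch; the actual stanza in buildwrapper.cabal has more fields and may be laid out differently:

```cabal
executable buildwrapper
  -- uncomment to link dynamically against the GHC API libraries
  ghc-options: -dynamic
```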

Equational reasoning at scale

Planet Haskell - Tue, 07/29/2014 - 06:00
Haskell programmers care about the correctness of their software, and they specify correctness conditions in the form of equations that their code must satisfy. They can then verify the correctness of these equations using equational reasoning to prove that the abstractions they build are sound. To an outsider this might seem like a futile, academic exercise: proving the correctness of small abstractions is difficult, so what hope do we have to prove larger abstractions correct? This post explains how to do precisely that: scale proofs to large and complex abstractions.

Purely functional programming uses composition to scale programs, meaning that:

We build small components that we can verify correct in isolation

We compose smaller components into larger components

If you saw "components" and thought "functions", think again! We can compose things that do not even remotely resemble functions, such as proofs! In fact, Haskell programmers prove large-scale properties exactly the same way we build large-scale programs:

We build small proofs that we can verify correct in isolation

We compose smaller proofs into larger proofs

The following sections illustrate in detail how this works in practice, using Monoids as the running example. We will prove the Monoid laws for simple types and work our way up to proving the Monoid laws for much more complex types. Along the way we'll learn how to keep the proof complexity flat as the types grow in size.

Monoids

Haskell's Prelude provides the following Monoid type class:

    class Monoid m where
        mempty  :: m
        mappend :: m -> m -> m

    -- An infix operator equivalent to `mappend`
    (<>) :: Monoid m => m -> m -> m
    x <> y = mappend x y

... and all Monoid instances must obey the following three laws:

    mempty <> x = x                -- Left identity

    x <> mempty = x                -- Right identity

    (x <> y) <> z = x <> (y <> z)  -- Associativity

For example, Ints form a Monoid:

    -- See "Appendix A" for some caveats
    instance Monoid Int where
        mempty  = 0
        mappend = (+)

...
and the Monoid laws for Ints are just the laws of addition:

    0 + x = x

    x + 0 = x

    (x + y) + z = x + (y + z)

Now we can use (<>) and mempty instead of (+) and 0:

    >>> 4 <> 2
    6
    >>> 5 <> mempty <> 5
    10

This appears useless at first glance. We already have (+) and 0, so why are we using the Monoid operations?

Extending Monoids

Well, what if I want to combine things other than Ints, like pairs of Ints? I want to be able to write code like this:

    >>> (1, 2) <> (3, 4)
    (4, 6)

Well, that seems mildly interesting. Let's try to define a Monoid instance for pairs of Ints:

    instance Monoid (Int, Int) where
        mempty = (0, 0)
        mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)

Now my wish is true and I can "add" binary tuples together using (<>) and mempty:

    >>> (1, 2) <> (3, 4)
    (4, 6)
    >>> (1, 2) <> (3, mempty) <> (mempty, 4)
    (4, 6)
    >>> (1, 2) <> mempty <> (3, 4)
    (4, 6)

However, I still haven't proven that this new Monoid instance obeys the Monoid laws. Fortunately, this is a very simple proof.

I'll begin with the first Monoid law, which requires that:

    mempty <> x = x

We will begin from the left-hand side of the equation and try to arrive at the right-hand side by substituting equals-for-equals (a.k.a. "equational reasoning"):

    -- Left-hand side of the equation
    mempty <> x

    -- x <> y = mappend x y
    = mappend mempty x

    -- mempty = (0, 0)
    = mappend (0, 0) x

    -- Define: x = (xL, xR), since `x` is a tuple
    = mappend (0, 0) (xL, xR)

    -- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
    = (0 + xL, 0 + xR)

    -- 0 + x = x
    = (xL, xR)

    -- x = (xL, xR)
    = x

The proof for the second Monoid law is symmetric:

    -- Left-hand side of the equation
    x <> mempty

    -- x <> y = mappend x y
    = mappend x mempty

    -- mempty = (0, 0)
    = mappend x (0, 0)

    -- Define: x = (xL, xR), since `x` is a tuple
    = mappend (xL, xR) (0, 0)

    -- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
    = (xL + 0, xR + 0)

    -- x + 0 = x
    = (xL, xR)

    -- x = (xL, xR)
    = x

The third Monoid law requires that (<>) is associative:

    (x <> y) <> z = x <> (y <> z)

Again I'll begin from the left side of the equation:

    -- Left-hand side
    (x <> y) <> z

    -- x <> y = mappend x y
    = mappend (mappend x y) z

    -- x = (xL, xR)
    -- y = (yL, yR)
    -- z = (zL, zR)
    = mappend (mappend (xL, xR) (yL, yR)) (zL, zR)

    -- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
    = mappend (xL + yL, xR + yR) (zL, zR)

    -- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
    = ((xL + yL) + zL, (xR + yR) + zR)

    -- (x + y) + z = x + (y + z)
    = (xL + (yL + zL), xR + (yR + zR))

    -- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
    = mappend (xL, xR) (yL + zL, yR + zR)

    -- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
    = mappend (xL, xR) (mappend (yL, yR) (zL, zR))

    -- x = (xL, xR)
    -- y = (yL, yR)
    -- z = (zL, zR)
    = mappend x (mappend y z)

    -- x <> y = mappend x y
    = x <> (y <> z)

That completes the proof of the three Monoid laws, but I'm not satisfied with these proofs.

Generalizing proofs

I don't like the above proofs because they are disposable, meaning that I cannot reuse them to prove other properties of interest. I'm a programmer, so I loathe busy work and unnecessary repetition, both for code and proofs.
I would like to find a way to generalize the above proofs so that I can use them in more places.

We improve proof reuse in the same way that we improve code reuse. To see why, consider the following sort function:

    sort :: [Int] -> [Int]

This sort function is disposable because it only works on Ints. For example, I cannot use the above function to sort a list of Doubles.

Fortunately, programming languages with generics let us generalize sort by parametrizing sort on the element type of the list:

    sort :: Ord a => [a] -> [a]

That type says that we can call sort on any list of as, so long as the type a implements the Ord type class (a comparison interface). This works because sort doesn't really care whether or not the elements are Ints; sort only cares if they are comparable.

Similarly, we can make the proof more "generic". If we inspect the proof closely, we will notice that we don't really care whether or not the tuple contains Ints. The only Int-specific properties we use in our proof are:

    0 + x = x

    x + 0 = x

    (x + y) + z = x + (y + z)

However, these properties hold true for all Monoids, not just Ints. Therefore, we can generalize our Monoid instance for tuples by parametrizing it on the type of each field of the tuple:

    instance (Monoid a, Monoid b) => Monoid (a, b) where
        mempty = (mempty, mempty)
        mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)

The above Monoid instance says that we can combine tuples so long as we can combine their individual fields. Our original Monoid instance was just a special case of this instance where both the a and b types are Ints.

Note: The mempty and mappend on the left-hand side of each equation are for tuples. The memptys and mappends on the right-hand side of each equation are for the types a and b.
Haskell overloads type class methods like mempty and mappend to work on any type that implements the Monoid type class, and the compiler distinguishes them by their inferred types.

We can similarly generalize our original proofs, too, by just replacing the Int-specific parts with their more general Monoid counterparts.

Here is the generalized proof of the left identity law:

    -- Left-hand side of the equation
    mempty <> x

    -- x <> y = mappend x y
    = mappend mempty x

    -- mempty = (mempty, mempty)
    = mappend (mempty, mempty) x

    -- Define: x = (xL, xR), since `x` is a tuple
    = mappend (mempty, mempty) (xL, xR)

    -- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
    = (mappend mempty xL, mappend mempty xR)

    -- Monoid law: mappend mempty x = x
    = (xL, xR)

    -- x = (xL, xR)
    = x

... the right identity law:

    -- Left-hand side of the equation
    x <> mempty

    -- x <> y = mappend x y
    = mappend x mempty

    -- mempty = (mempty, mempty)
    = mappend x (mempty, mempty)

    -- Define: x = (xL, xR), since `x` is a tuple
    = mappend (xL, xR) (mempty, mempty)

    -- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
    = (mappend xL mempty, mappend xR mempty)

    -- Monoid law: mappend x mempty = x
    = (xL, xR)

    -- x = (xL, xR)
    = x

...
and the associativity law:

    -- Left-hand side
    (x <> y) <> z

    -- x <> y = mappend x y
    = mappend (mappend x y) z

    -- x = (xL, xR)
    -- y = (yL, yR)
    -- z = (zL, zR)
    = mappend (mappend (xL, xR) (yL, yR)) (zL, zR)

    -- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
    = mappend (mappend xL yL, mappend xR yR) (zL, zR)

    -- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
    = (mappend (mappend xL yL) zL, mappend (mappend xR yR) zR)

    -- Monoid law: mappend (mappend x y) z = mappend x (mappend y z)
    = (mappend xL (mappend yL zL), mappend xR (mappend yR zR))

    -- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
    = mappend (xL, xR) (mappend yL zL, mappend yR zR)

    -- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
    = mappend (xL, xR) (mappend (yL, yR) (zL, zR))

    -- x = (xL, xR)
    -- y = (yL, yR)
    -- z = (zL, zR)
    = mappend x (mappend y z)

    -- x <> y = mappend x y
    = x <> (y <> z)

This more general Monoid instance lets us stick any Monoids inside the tuple fields and we can still combine the tuples. For example, lists form a Monoid:

    -- Exercise: Prove the monoid laws for lists
    instance Monoid [a] where
        mempty  = []
        mappend = (++)

... so we can stick lists inside the right field of each tuple and still combine them:

    >>> (1, [2, 3]) <> (4, [5, 6])
    (5, [2, 3, 5, 6])
    >>> (1, [2, 3]) <> (4, mempty) <> (mempty, [5, 6])
    (5, [2, 3, 5, 6])
    >>> (1, [2, 3]) <> mempty <> (4, [5, 6])
    (5, [2, 3, 5, 6])

Why, we can even stick yet another tuple inside the right field and still combine them:

    >>> (1, (2, 3)) <> (4, (5, 6))
    (5, (7, 9))

We can try even more exotic permutations and everything still "just works":

    >>> ((1, [2, 3]), ([4, 5], 6)) <> ((7, [8, 9]), ([10, 11], 12))
    ((8, [2, 3, 8, 9]), ([4, 5, 10, 11], 18))

This is our first example of a "scalable proof". We began from three primitive building blocks:

Int is a Monoid

[a] is a Monoid

(a, b) is a Monoid if a is a Monoid and b is a Monoid

... and we connected those three building blocks to assemble a variety of new Monoid instances.
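The pencil-and-paper proofs can also be spot-checked mechanically. Here is a small sketch of my own (not from the post): since base never gained a Monoid Int instance, and modern GHC also requires a Semigroup instance, I wrap Int in a hypothetical Count newtype standing in for the additive instance:

```haskell
-- A stand-in for the post's `instance Monoid Int` (see Appendix A).
newtype Count = Count Int deriving (Eq, Show)

instance Semigroup Count where
    Count x <> Count y = Count (x + y)

instance Monoid Count where
    mempty = Count 0

-- The three Monoid laws, checked at concrete points using the
-- standard (a, b) instance from base.
leftIdentity, rightIdentity :: (Count, [Int]) -> Bool
leftIdentity  x = mempty <> x == x
rightIdentity x = x <> mempty == x

associativity :: (Count, [Int]) -> (Count, [Int]) -> (Count, [Int]) -> Bool
associativity x y z = (x <> y) <> z == x <> (y <> z)
```

QuickCheck would let you quantify over all inputs instead of sample points, but even point checks like leftIdentity (Count 1, [2, 3]) catch transcription slips in an instance.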
No matter how many tuples we nest, the result is still a Monoid and obeys the Monoid laws. We don't need to re-prove the Monoid laws every time we assemble a new permutation of these building blocks.

However, these building blocks are still pretty limited. What other useful things can we combine to build new Monoids?

IO

We're so used to thinking of Monoids as data, so let's define a new Monoid instance for something entirely un-data-like:

    -- See "Appendix A" for some caveats
    instance Monoid b => Monoid (IO b) where
        mempty = return mempty

        mappend io1 io2 = do
            a1 <- io1
            a2 <- io2
            return (mappend a1 a2)

The above instance says: "If b is a Monoid, then an IO action that returns a b is also a Monoid". Let's test this using the getLine function from the Prelude:

    -- Read one line of input from stdin
    getLine :: IO String

String is a Monoid, since a String is just a list of characters, so we should be able to mappend multiple getLine statements together. Let's see what happens:

    >>> getLine  -- Reads one line of input
    Hello
    "Hello"
    >>> getLine <> getLine
    ABC
    DEF
    "ABCDEF"
    >>> getLine <> getLine <> getLine
    12
    34
    56
    "123456"

Neat! When we combine multiple commands we combine their effects and their results.

Of course, we don't have to limit ourselves to reading strings.
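An aside from the present day, not from the original post: since base 4.9, essentially this IO instance ships with GHC (via Semigroup), so the behaviour can be checked without any keyboard input by combining effect-free actions:

```haskell
import System.IO.Unsafe (unsafePerformIO)

-- Combining two effect-free IO actions with base's own
-- `Monoid a => Monoid (IO a)` instance: effects are sequenced,
-- results are mappended.
combined :: IO String
combined = return "ABC" <> return "DEF"

-- Safe to observe purely here only because `combined` performs
-- no real side effects; don't do this with actual I/O.
combinedResult :: String
combinedResult = unsafePerformIO combined
```

Running combined >>= putStrLn prints ABCDEF, matching what the hand-written instance above describes.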
We can use readLn from the Prelude to read in anything that implements the Read type class:

    -- Parse a `Read`able value from one line of stdin
    readLn :: Read a => IO a

All we have to do is tell the compiler which type a we intend to Read by providing a type signature:

    >>> readLn :: IO (Int, Int)
    (1, 2)
    (1,2)
    >>> readLn <> readLn :: IO (Int, Int)
    (1,2)
    (3,4)
    (4,6)
    >>> readLn <> readLn <> readLn :: IO (Int, Int)
    (1,2)
    (3,4)
    (5,6)
    (9,12)

This works because:

Int is a Monoid

Therefore, (Int, Int) is a Monoid

Therefore, IO (Int, Int) is a Monoid

Or let's flip things around and nest IO actions inside of a tuple:

    >>> let ios = (getLine, readLn) :: (IO String, IO (Int, Int))
    >>> let (getLines, readLns) = ios <> ios <> ios
    >>> getLines
    12
    34
    56
    "123456"
    >>> readLns
    (1,2)
    (3,4)
    (5,6)
    (9,12)

We can very easily reason that the type (IO String, IO (Int, Int)) obeys the Monoid laws because:

String is a Monoid

If String is a Monoid, then IO String is also a Monoid

Int is a Monoid

If Int is a Monoid, then (Int, Int) is also a Monoid

If (Int, Int) is a Monoid, then IO (Int, Int) is also a Monoid

If IO String is a Monoid and IO (Int, Int) is a Monoid, then (IO String, IO (Int, Int)) is also a Monoid

However, we don't really have to reason about this at all. The compiler will automatically assemble the correct Monoid instance for us. The only thing we need to verify is that the primitive Monoid instances obey the Monoid laws, and then we can trust that any larger Monoid instance the compiler derives will also obey the Monoid laws.

The Unit Monoid

The Haskell Prelude also provides the putStrLn function, which echoes a String to standard output with a newline:

    putStrLn :: String -> IO ()

Is putStrLn combinable? There's only one way to find out!

    >>> putStrLn "Hello" <> putStrLn "World"
    Hello
    World

Interesting, but why does that work?
Well, let's look at the types of the commands we are combining:

    putStrLn "Hello" :: IO ()
    putStrLn "World" :: IO ()

Well, we said that IO b is a Monoid if b is a Monoid, and b in this case is () (pronounced "unit"), which you can think of as an "empty tuple". Therefore, () must form a Monoid of some sort, and if we dig into Data.Monoid, we will discover the following Monoid instance:

    -- Exercise: Prove the monoid laws for `()`
    instance Monoid () where
        mempty = ()
        mappend () () = ()

This says that empty tuples form a trivial Monoid, since there's only one possible value (ignoring bottom) for an empty tuple: (). Therefore, we can derive that IO () is a Monoid because () is a Monoid.

Functions

Alright, so we can combine putStrLn "Hello" with putStrLn "World", but can we combine naked putStrLn functions?

    >>> (putStrLn <> putStrLn) "Hello"
    Hello
    Hello

Woah, how does that work?

We never wrote a Monoid instance for the type String -> IO (), yet somehow the compiler magically accepted the above code and produced a sensible result.

This works because of the following Monoid instance for functions:

    instance Monoid b => Monoid (a -> b) where
        mempty = \_ -> mempty
        mappend f g = \a -> mappend (f a) (g a)

This says: "If b is a Monoid, then any function that returns a b is also a Monoid".

The compiler then deduced that:

() is a Monoid

If () is a Monoid, then IO () is also a Monoid

If IO () is a Monoid, then String -> IO () is also a Monoid

The compiler is a trusted friend, deducing Monoid instances we never knew existed.

Monoid plugins

Now we have enough building blocks to assemble a non-trivial example.
Let's build a key logger with a Monoid-based plugin system.

The central scaffold of our program is a simple main loop that echoes characters from standard input to standard output:

    main = do
        hSetEcho stdin False
        forever $ do
            c <- getChar
            putChar c

However, we would like to intercept key strokes for nefarious purposes, so we will slightly modify this program to install a handler at the beginning of the program that we will invoke on every incoming character:

    install :: IO (Char -> IO ())
    install = ???

    main = do
        hSetEcho stdin False
        handleChar <- install
        forever $ do
            c <- getChar
            handleChar c
            putChar c

Notice that the type of install is exactly the correct type to be a Monoid:

() is a Monoid

Therefore, IO () is also a Monoid

Therefore, Char -> IO () is also a Monoid

Therefore, IO (Char -> IO ()) is also a Monoid

Therefore, we can combine key logging plugins together using Monoid operations. Here is one such example:

    type Plugin = IO (Char -> IO ())

    logTo :: FilePath -> Plugin
    logTo filePath = do
        handle <- openFile filePath WriteMode
        return (hPutChar handle)

    main = do
        hSetEcho stdin False
        handleChar <- logTo "file1.txt" <> logTo "file2.txt"
        forever $ do
            c <- getChar
            handleChar c
            putChar c

Now, every key stroke will be recorded to both file1.txt and file2.txt. Let's confirm that this works as expected:

    $ ./logger
    TestABC42
    $ cat file1.txt
    TestABC42
    $ cat file2.txt
    TestABC42

Try writing your own Plugins and mixing them in with (<>) to see what happens. "Appendix C" contains the complete code for this section so you can experiment with your own Plugins.

Applicatives

Notice that I never actually proved the Monoid laws for the following two Monoid instances:

    instance Monoid b => Monoid (a -> b) where
        mempty = \_ -> mempty
        mappend f g = \a -> mappend (f a) (g a)

    instance Monoid a => Monoid (IO a) where
        mempty = return mempty

        mappend io1 io2 = do
            a1 <- io1
            a2 <- io2
            return (mappend a1 a2)

The reason why is that they are both special cases of a more general pattern.
We can detect the pattern if we rewrite both of them to use the pure and liftA2 functions from Control.Applicative:

    import Control.Applicative (pure, liftA2)

    instance Monoid b => Monoid (a -> b) where
        mempty  = pure mempty
        mappend = liftA2 mappend

    instance Monoid b => Monoid (IO b) where
        mempty  = pure mempty
        mappend = liftA2 mappend

This works because both IO and functions implement the following Applicative interface:

    class Functor f => Applicative f where
        pure  :: a -> f a
        (<*>) :: f (a -> b) -> f a -> f b

    -- Lift a binary function over the functor `f`
    liftA2 :: Applicative f => (a -> b -> c) -> f a -> f b -> f c
    liftA2 f x y = (pure f <*> x) <*> y

... and all Applicative instances must obey several Applicative laws:

    pure id <*> v = v

    ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)

    pure f <*> pure x = pure (f x)

    u <*> pure y = pure (\f -> f y) <*> u

These laws may seem a bit ad hoc, but this paper explains that you can reorganize the Applicative class to this equivalent type class:

    class Functor f => Monoidal f where
        unit :: f ()
        (#)  :: f a -> f b -> f (a, b)

Then the corresponding laws become much more symmetric:

    fmap snd (unit # x) = x                 -- Left identity

    fmap fst (x # unit) = x                 -- Right identity

    fmap assoc ((x # y) # z) = x # (y # z)  -- Associativity
      where
        assoc ((a, b), c) = (a, (b, c))

    fmap (f *** g) (x # y) = fmap f x # fmap g y  -- Naturality
      where
        (f *** g) (a, b) = (f a, g b)

I personally prefer the Monoidal formulation, but you go to war with the army you have, so we will use the Applicative type class for this post.

All Applicatives possess a very powerful property: they can all automatically lift Monoid operations using the following instance:

    instance (Applicative f, Monoid b) => Monoid (f b) where
        mempty  = pure mempty
        mappend = liftA2 mappend

This says: "If f is an Applicative and b is a Monoid, then f b is also a Monoid."
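A present-day footnote of mine, not the post's: base 4.12 later added the Ap newtype to Data.Monoid, which packages exactly this liftA2-based instance without the instance-overlap problem. Ap f a is a Monoid whenever f is Applicative and a is a Monoid:

```haskell
import Data.Monoid (Ap(..), Sum(..))

-- Lifting the Sum-of-Int Monoid through the list Applicative:
-- liftA2 mappend combines every element of the first list with
-- every element of the second.
pairSums :: [Sum Int]
pairSums = getAp (Ap [Sum 1, Sum 2] <> Ap [Sum 10, Sum 20])
```

Here pairSums evaluates to [Sum 11, Sum 21, Sum 12, Sum 22]: each pairing of the two lists' elements, combined with the underlying Monoid.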
In other words, we can automatically extend any existing Monoid with some new feature f and get back a new Monoid.

Note: The above instance is bad Haskell because it overlaps with other type class instances. In practice we have to duplicate the above code once for each Applicative. Also, for some Applicatives we may want a different Monoid instance.

We can prove that the above instance obeys the Monoid laws without knowing anything about f and b, other than the fact that f obeys the Applicative laws and b obeys the Monoid laws. These proofs are a little long, so I've included them in Appendix B.

Both IO and functions implement the Applicative type class:

    instance Applicative IO where
        pure = return

        iof <*> iox = do
            f <- iof
            x <- iox
            return (f x)

    instance Applicative ((->) a) where
        pure x = \_ -> x

        kf <*> kx = \a ->
            let f = kf a
                x = kx a
            in  f x

This means that we can kill two birds with one stone. Every time we prove the Applicative laws for some functor F:

    instance Applicative F where
        ...

...
we automatically prove that the following Monoid instance is correct for free:

    instance Monoid b => Monoid (F b) where
        mempty  = pure mempty
        mappend = liftA2 mappend

In the interest of brevity, I will skip the proofs of the Applicative laws, but I may cover them in a subsequent post.

The beauty of Applicative Functors is that every new Applicative instance we discover adds a new building block to our Monoid toolbox, and Haskell programmers have already discovered lots of Applicative Functors.

Revisiting tuples

One of the very first Monoid instances we wrote was:

    instance (Monoid a, Monoid b) => Monoid (a, b) where
        mempty = (mempty, mempty)
        mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)

Check this out:

    instance (Monoid a, Monoid b) => Monoid (a, b) where
        mempty  = pure mempty
        mappend = liftA2 mappend

This Monoid instance is yet another special case of the Applicative pattern we just covered!

This works because of the following Applicative instance in Control.Applicative:

    instance Monoid a => Applicative ((,) a) where
        pure b = (mempty, b)
        (a1, f) <*> (a2, x) = (mappend a1 a2, f x)

This instance obeys the Applicative laws (proof omitted), so our Monoid instance for tuples is automatically correct, too.

Composing applicatives

In the very first section I wrote:

Haskell programmers prove large-scale properties exactly the same way we build large-scale programs:

We build small proofs that we can verify correct in isolation

We compose smaller proofs into larger proofs

I don't like to use the word compose lightly. In the context of category theory, compose has a very rigorous meaning, indicating composition of morphisms in some category.
This final section will show that we can actually compose Monoid proofs in a very rigorous sense of the word.

We can define a category of Monoid proofs:

Objects are types and their associated Monoid proofs

Morphisms are Applicative Functors

The identity morphism is the Identity applicative

The composition operation is composition of Applicative Functors

The category laws are isomorphisms instead of equalities

So in our Plugin example, we began from the proof that () was a Monoid and then composed three Applicative morphisms to prove that Plugin was a Monoid. I will use the following diagram to illustrate this:

    +-----------------------+
    |                       |
    | Legend: * = Object    |
    |                       |
    |         v             |
    |         | = Morphism  |
    |         v             |
    |                       |
    +-----------------------+

    * `()` is a `Monoid`

    v
    | IO
    v

    * `IO ()` is a `Monoid`

    v
    | ((->) String)
    v

    * `String -> IO ()` is a `Monoid`

    v
    | IO
    v

    * `IO (String -> IO ())` (i.e. `Plugin`) is a `Monoid`

Therefore, we were literally composing proofs together.

Conclusion

You can equationally reason at scale by decomposing larger proofs into smaller reusable proofs, the same way we decompose programs into smaller and more reusable components. There is no limit to how many proofs you can compose together, and therefore there is no limit to how complex a program you can tame using equational reasoning.

This post only gave one example of composing proofs within Haskell. The more you learn the language, the more examples of composable proofs you will encounter. Another common example is automatically deriving Monad proofs by composing monad transformers.

As you learn Haskell, you will discover that the hard part is not proving things. Rather, the challenge is learning how to decompose proofs into smaller proofs, and you can cultivate this skill by studying category theory and abstract algebra.
These mathematical disciplines teach you how to extract common and reusable proofs and patterns from what appears to be disposable and idiosyncratic code.

Appendix A - Missing Monoid instances

These Monoid instances from this post do not actually appear in the Haskell standard library:

    instance Monoid b => Monoid (IO b)

    instance Monoid Int

The first instance was recently proposed here on the Glasgow Haskell Users mailing list. However, in the short term you can work around it by writing your own Monoid instances by hand, just by inserting a sufficient number of pures and liftA2s.

For example, suppose we wanted to provide a Monoid instance for Plugin. We would just newtype Plugin and write:

    newtype Plugin = Plugin { install :: IO (String -> IO ()) }

    instance Monoid Plugin where
        mempty = Plugin (pure (pure (pure mempty)))

        mappend (Plugin p1) (Plugin p2) =
            Plugin (liftA2 (liftA2 (liftA2 mappend)) p1 p2)

This is just what the compiler would have derived, written out by hand.

Alternatively, you could define an orphan Monoid instance for IO, but this is generally frowned upon.

There is no default Monoid instance for Int because there are actually two possible instances to choose from:

    -- Alternative #1
    instance Monoid Int where
        mempty  = 0
        mappend = (+)

    -- Alternative #2
    instance Monoid Int where
        mempty  = 1
        mappend = (*)

So instead, Data.Monoid sidesteps the issue by providing two newtypes to distinguish which instance we prefer:

    newtype Sum a = Sum { getSum :: a }

    instance Num a => Monoid (Sum a)

    newtype Product a = Product { getProduct :: a }

    instance Num a => Monoid (Product a)

An even better solution is to use a semiring, which allows two Monoid instances to coexist for the same type. You can think of Haskell's Num class as an approximation of the semiring class:

    class Num a where
        fromInteger :: Integer -> a

        (+) :: a -> a -> a
        (*) :: a -> a -> a

        -- ... and other operations unrelated to semirings

Note that we can also lift the Num class over the Applicative class, exactly the same way we lifted the Monoid class.
Here's the code:

    instance (Applicative f, Num a) => Num (f a) where
        fromInteger n = pure (fromInteger n)

        (+) = liftA2 (+)
        (*) = liftA2 (*)
        (-) = liftA2 (-)

        negate = fmap negate
        abs    = fmap abs
        signum = fmap signum

This lifting guarantees that if a obeys the semiring laws then so will f a. Of course, you will have to specialize the above instance to every concrete Applicative, because otherwise you will get overlapping instances.

Appendix B

These are the proofs to establish that the following Monoid instance obeys the Monoid laws:

    instance (Applicative f, Monoid b) => Monoid (f b) where
        mempty = pure mempty
        mappend = liftA2 mappend

... meaning that if f obeys the Applicative laws and b obeys the Monoid laws, then f b also obeys the Monoid laws.

Proof of the left identity law:

    mempty <> x

    -- x <> y = mappend x y
    = mappend mempty x

    -- mappend = liftA2 mappend
    = liftA2 mappend mempty x

    -- mempty = pure mempty
    = liftA2 mappend (pure mempty) x

    -- liftA2 f x y = (pure f <*> x) <*> y
    = (pure mappend <*> pure mempty) <*> x

    -- Applicative law: pure f <*> pure x = pure (f x)
    = pure (mappend mempty) <*> x

    -- Eta conversion
    = pure (\a -> mappend mempty a) <*> x

    -- mappend mempty x = x
    = pure (\a -> a) <*> x

    -- id = \x -> x
    = pure id <*> x

    -- Applicative law: pure id <*> v = v
    = x

Proof of the right identity law:

    x <> mempty

    -- x <> y = mappend x y
    = mappend x mempty

    -- mappend = liftA2 mappend
    = liftA2 mappend x mempty

    -- mempty = pure mempty
    = liftA2 mappend x (pure mempty)

    -- liftA2 f x y = (pure f <*> x) <*> y
    = (pure mappend <*> x) <*> pure mempty

    -- Applicative law: u <*> pure y = pure (\f -> f y) <*> u
    = pure (\f -> f mempty) <*> (pure mappend <*> x)

    -- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
    = ((pure (.) <*> pure (\f -> f mempty)) <*> pure mappend) <*> x

    -- Applicative law: pure f <*> pure x = pure (f x)
    = (pure ((.) (\f -> f mempty)) <*> pure mappend) <*> x

    -- Applicative law: pure f <*> pure x = pure (f x)
    = pure ((.) (\f -> f mempty) mappend) <*> x

    -- `(.) f g` is just prefix notation for `f . g`
    = pure ((\f -> f mempty) . mappend) <*> x

    -- f . g = \x -> f (g x)
    = pure (\x -> (\f -> f mempty) (mappend x)) <*> x

    -- Apply the lambda
    = pure (\x -> mappend x mempty) <*> x

    -- Monoid law: mappend x mempty = x
    = pure (\x -> x) <*> x

    -- id = \x -> x
    = pure id <*> x

    -- Applicative law: pure id <*> v = v
    = x

Proof of the associativity law:

    (x <> y) <> z

    -- x <> y = mappend x y
    = mappend (mappend x y) z

    -- mappend = liftA2 mappend
    = liftA2 mappend (liftA2 mappend x y) z

    -- liftA2 f x y = (pure f <*> x) <*> y
    = (pure mappend <*> ((pure mappend <*> x) <*> y)) <*> z

    -- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
    = (((pure (.) <*> pure mappend) <*> (pure mappend <*> x)) <*> y) <*> z

    -- Applicative law: pure f <*> pure x = pure (f x)
    = ((pure f <*> (pure mappend <*> x)) <*> y) <*> z
      where f = (.) mappend

    -- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
    = ((((pure (.) <*> pure f) <*> pure mappend) <*> x) <*> y) <*> z
      where f = (.) mappend

    -- Applicative law: pure f <*> pure x = pure (f x)
    = (((pure f <*> pure mappend) <*> x) <*> y) <*> z
      where f = (.) ((.) mappend)

    -- Applicative law: pure f <*> pure x = pure (f x)
    = ((pure f <*> x) <*> y) <*> z
      where f = (.) ((.) mappend) mappend

    -- (.) f g = f . g
    = ((pure f <*> x) <*> y) <*> z
      where f = ((.) mappend) . mappend

    -- Eta conversion
    = ((pure f <*> x) <*> y) <*> z
      where f x = (((.) mappend) . mappend) x

    -- (f . g) x = f (g x)
    = ((pure f <*> x) <*> y) <*> z
      where f x = (.) mappend (mappend x)

    -- (.) f g = f . g
    = ((pure f <*> x) <*> y) <*> z
      where f x = mappend . (mappend x)

    -- Eta conversion
    = ((pure f <*> x) <*> y) <*> z
      where f x y = (mappend . (mappend x)) y

    -- (f . g) x = f (g x)
    = ((pure f <*> x) <*> y) <*> z
      where f x y = mappend (mappend x y)

    -- Eta conversion
    = ((pure f <*> x) <*> y) <*> z
      where f x y z = mappend (mappend x y) z

    -- Monoid law: mappend (mappend x y) z = mappend x (mappend y z)
    = ((pure f <*> x) <*> y) <*> z
      where f x y z = mappend x (mappend y z)

    -- (f . g) x = f (g x)
    = ((pure f <*> x) <*> y) <*> z
      where f x y z = (mappend x . mappend y) z

    -- Eta conversion
    = ((pure f <*> x) <*> y) <*> z
      where f x y = mappend x . mappend y

    -- (.) f g = f . g
    = ((pure f <*> x) <*> y) <*> z
      where f x y = (.) (mappend x) (mappend y)

    -- (f . g) x = f (g x)
    = ((pure f <*> x) <*> y) <*> z
      where f x y = (((.) . mappend) x) (mappend y)

    -- (f . g) x = f (g x)
    = ((pure f <*> x) <*> y) <*> z
      where f x y = ((((.) . mappend) x) . mappend) y

    -- Eta conversion
    = ((pure f <*> x) <*> y) <*> z
      where f x = (((.) . mappend) x) . mappend

    -- (.) f g = f . g
    = ((pure f <*> x) <*> y) <*> z
      where f x = (.) (((.) . mappend) x) mappend

    -- Lambda abstraction
    = ((pure f <*> x) <*> y) <*> z
      where f x = (\k -> k mappend) ((.) (((.) . mappend) x))

    -- (f . g) x = f (g x)
    = ((pure f <*> x) <*> y) <*> z
      where f x = (\k -> k mappend) (((.) . ((.) . mappend)) x)

    -- Eta conversion
    = ((pure f <*> x) <*> y) <*> z
      where f = (\k -> k mappend) . ((.) . ((.) . mappend))

    -- (.) f g = f . g
    = ((pure f <*> x) <*> y) <*> z
      where f = (.) (\k -> k mappend) ((.) . ((.) . mappend))

    -- Applicative law: pure f <*> pure x = pure (f x)
    = (((pure g <*> pure f) <*> x) <*> y) <*> z
      where g = (.) (\k -> k mappend)
            f = (.) . ((.) . mappend)

    -- Applicative law: pure f <*> pure x = pure (f x)
    = ((((pure (.) <*> pure (\k -> k mappend)) <*> pure f) <*> x) <*> y) <*> z
      where f = (.) . ((.) . mappend)

    -- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
    = ((pure (\k -> k mappend) <*> (pure f <*> x)) <*> y) <*> z
      where f = (.) . ((.) . mappend)

    -- Applicative law: u <*> pure y = pure (\k -> k y) <*> u
    = (((pure f <*> x) <*> pure mappend) <*> y) <*> z
      where f = (.) . ((.) . mappend)

    -- (.) f g = f . g
    = (((pure f <*> x) <*> pure mappend) <*> y) <*> z
      where f = (.) (.) ((.) . mappend)

    -- Applicative law: pure f <*> pure x = pure (f x)
    = ((((pure g <*> pure f) <*> x) <*> pure mappend) <*> y) <*> z
      where g = (.) (.)
            f = (.) . mappend

    -- Applicative law: pure f <*> pure x = pure (f x)
    = (((((pure (.) <*> pure (.)) <*> pure f) <*> x) <*> pure mappend) <*> y) <*> z
      where f = (.) . mappend

    -- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
    = (((pure (.) <*> (pure f <*> x)) <*> pure mappend) <*> y) <*> z
      where f = (.) . mappend

    -- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
    = ((pure f <*> x) <*> (pure mappend <*> y)) <*> z
      where f = (.) . mappend

    -- (.) f g = f . g
    = ((pure f <*> x) <*> (pure mappend <*> y)) <*> z
      where f = (.) (.) mappend

    -- Applicative law: pure f <*> pure x = pure (f x)
    = (((pure f <*> pure mappend) <*> x) <*> (pure mappend <*> y)) <*> z
      where f = (.) (.)

    -- Applicative law: pure f <*> pure x = pure (f x)
    = ((((pure (.) <*> pure (.)) <*> pure mappend) <*> x) <*> (pure mappend <*> y)) <*> z

    -- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
    = ((pure (.) <*> (pure mappend <*> x)) <*> (pure mappend <*> y)) <*> z

    -- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
    = (pure mappend <*> x) <*> ((pure mappend <*> y) <*> z)

    -- liftA2 f x y = (pure f <*> x) <*> y
    = liftA2 mappend x (liftA2 mappend y z)

    -- mappend = liftA2 mappend
    = mappend x (mappend y z)

    -- x <> y = mappend x y
    = x <> (y <> z)

Appendix C: Monoid key logging

Here is the complete program for a key logger with a Monoid-based plugin system:

    import Control.Applicative (pure, liftA2)
    import Control.Monad (forever)
    import Data.Monoid
    import System.IO

    instance Monoid b => Monoid (IO b) where
        mempty = pure mempty
        mappend = liftA2 mappend

    type Plugin = IO (Char -> IO ())

    logTo :: FilePath -> Plugin
    logTo filePath = do
        handle <- openFile filePath WriteMode
        return (hPutChar handle)

    main = do
        hSetEcho stdin False
        handleChar <- logTo "file1.txt" <> logTo "file2.txt"
        forever $ do
            c <- getChar
            handleChar c
            putChar c
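The same lifting works over any Applicative, not just IO. Here is a quick sketch specialized to Maybe; the newtype `Lifted` and its accessor `getLifted` are names I made up to avoid an orphan instance, and the separate Semigroup instance is needed because modern GHC makes Semigroup a superclass of Monoid:

```haskell
import Control.Applicative (liftA2)

-- A wrapper that lifts a Monoid through any Applicative (hypothetical
-- name, not from the post; avoids writing an orphan instance).
newtype Lifted f b = Lifted { getLifted :: f b }

instance (Applicative f, Monoid b) => Semigroup (Lifted f b) where
    Lifted x <> Lifted y = Lifted (liftA2 mappend x y)

instance (Applicative f, Monoid b) => Monoid (Lifted f b) where
    mempty = Lifted (pure mempty)

-- Specializing f to Maybe: both sides must succeed, and their results
-- are combined with the inner Monoid.
example :: Maybe String
example = getLifted (Lifted (Just "foo") <> Lifted (Just "bar"))

main :: IO ()
main = print example  -- prints Just "foobar"
```

Combining with `Nothing` on either side yields `Nothing`, exactly as `liftA2` dictates for Maybe.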

Similarity analysis of quilt blocks

Planet Haskell - Tue, 07/29/2014 - 06:00
As I've discussed elsewhere, I once wrote a program to enumerate all the possible quilt blocks of a certain type. The quilt blocks in question are, in quilt jargon, sixteen-patch half-square triangles. A half-square triangle, also called a “patch”, is two triangles of fabric sewn together, like this:

Then you sew four of these patches into a four-patch, say like this:

Then to make a sixteen-patch block of the type I was considering, you take four identical four-patch blocks, and sew them together with rotational symmetry, like this:

It turns out that there are exactly 72 different ways to do this. (Blocks equivalent under a reflection are considered the same, as are blocks obtained by exchanging the roles of black and white, which are merely stand-ins for arbitrary colors to be chosen later.) Here is the complete set of 72:

It's immediately clear that some of these resemble one another, sometimes so strongly that it can be hard to tell how they differ, while others are very distinctive and unique-seeming. I wanted to make the computer classify the blocks on the basis of similarity.

My idea was to try to find a way to get the computer to notice which blocks have distinctive components of one color. For example, many blocks have a distinctive diamond shape in the center. Some have a pinwheel like this:

which also has the diamond in the middle, while others have a different kind of pinwheel with no diamond:

I wanted to enumerate such components and ask the computer to list which blocks contained which shapes, then group them by similarity, the idea being that blocks with the same distinctive components are similar.

The program suite uses a compact notation of blocks and of shapes that makes it easy to figure out which blocks contain which distinctive components. Since each block is made of four identical four-patches, it's enough just to examine the four-patches.
Each of the half-square triangle patches can be oriented in two ways. Here are two of the 12 ways to orient the patches in a four-patch:

Each 16-patch is made of four four-patches, and you must imagine that the four-patches shown above are in the upper-left position in the 16-patch. Then symmetry of the 16-patch block means that triangles with the same label are in positions that are symmetric with respect to the entire block. For example, the two triangles labeled b are on opposite sides of the block's northwest-southeast diagonal. But there is no symmetry of the full 16-patch block that carries triangle d to triangle g, because d is on the edge of the block, while g is in the interior.

Triangles must be colored opposite colors if they are part of the same patch, but other than that there are no constraints on the coloring. A block might, of course, have patches in both orientations:

All the blocks with diagonals oriented this way are assigned descriptors made from the letters bbdefgii.

Once you have chosen one of the 12 ways to orient the diagonals in the four-patch, you still have to color the patches. A descriptor like bbeeffii describes the orientation of the diagonal lines in the squares, but it does not describe the way the four patches are colored; there are between 4 and 8 ways to color each sort of four-patch. For example, the bbeeffii four-patch shown earlier can be colored in six different ways:

In each case, all four diagonals run from northwest to southeast. (All other ways of coloring this four-patch are equivalent to one of these under one or more of rotation, reflection, and exchange of black and white.)

We can describe a patch by listing the descriptors of the eight triangles, grouped by which triangles form connected regions.
For example, the first block above is:

    b/bf/ee/fi/i

because there's an isolated white b triangle, then a black parallelogram made of a b and an f patch, then a white triangle made from the two white e triangles, then another parallelogram made from the black f and i, and finally, in the middle, the white i. (The two white e triangles appear to be separated, but when four of these four-patches are joined into a 16-patch block, the two white e patches will be adjacent and will form a single large triangle.)

The other five bbeeffii four-patches are, in the same order they are shown above:

    b/b/e/e/f/f/i/i
    b/b/e/e/fi/fi
    b/bfi/ee/f/i
    bfi/bfi/e/e
    bf/bf/e/e/i/i

All six have bbeeffii, but grouped differently depending on the colorings. The second one (b/b/e/e/f/f/i/i) has no regions with more than one triangle; the fifth (bfi/bfi/e/e) has two large regions of three triangles each, and two isolated triangles. In the latter four-patch, the bfi in the descriptor has three letters because the patch has a corresponding distinctive component made of three triangles.

I made up a list of the descriptors for all 72 blocks; I think I did this by hand. (The work directory contains a blocks file that maps blocks to their descriptors, but the Makefile does not say how to build it, suggesting that it was not automatically built.) From this list one can automatically extract a list of descriptors of interesting shapes: an interesting shape is two or more letters that appear together in some descriptor. (Or it can be the single letter j, which is exceptional; see below.)

For example, bffh represents a distinctive component. It can only occur in a patch that has a b, two fs, and an h, like this one:

and it will only be significant if the b, the two fs, and the h are the same color:

in which case you get this distinctive and interesting-looking hook component. There is only one block that includes this distinctive hook component; it has descriptor b/bffh/ee/j, and looks like this:
But some of the distinctive components are more common. The ee component represents the large white half-diamonds on the four sides. A block with ee in its descriptor always looks like this:

and the blocks formed from such patches always have a distinctive half-diamond component on each edge, like this:

(The stippled areas vary from block to block, but the blocks with ee in their descriptors always have the half-diamonds as shown.) The blocks listed there all have the ee component. There are many differences between them, but they all have the half-diamonds in common.

Other distinctive components have similar short descriptors. The two pinwheels I mentioned above are gh and fi, respectively; if you look at the list of gh blocks and the list of fi blocks you'll see all the blocks with each kind of pinwheel.

Descriptor j is an exception. It makes an interesting shape all by itself, because any block whose patches have j in their descriptor will have a distinctive-looking diamond component in the center. The four-patch looks like this:

so the full sixteen-patch looks like this:

where the stippled parts can vary. A look at the list of blocks with component j will confirm that they all have this basic similarity.

I had made a list of the descriptors for each of the 72 blocks, and from this I extracted a list of the descriptors for interesting component shapes. Then it was only a matter of finding the component descriptors in the block descriptors to know which blocks contained which components; if two blocks share two different distinctive components, they probably look somewhat similar. Then I sorted the blocks into groups, where two blocks were in the same group if they shared two distinctive components. The resulting grouping lists, for each block, which other blocks have at least two shapes in common with it. Such blocks do indeed tend to look quite similar.

This strategy was actually the second thing I tried; the first thing didn't work out well.
(I forget just what it was, but I think it involved finding polygons in each block that had white inside and black outside, or vice versa.) I was satisfied enough with this second attempt that I considered the project a success and stopped work on it.

The complete final results were:

- This tabulation of blocks that are somewhat similar
- This tabulation of blocks that are distinctly similar (This is the final product; I consider this a sufficiently definitive listing of “similar blocks”.)
- This tabulation of blocks that are extremely similar
- And these tabulations of all the blocks with various distinctive components: bd bf bfh bfi cd cdd cdf cf cfi ee eg egh egi fgh fh fi gg ggh ggi gh gi j

It may also be interesting to browse the work directory.
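The grouping step described above can be sketched in a few lines of Haskell. This is my reconstruction of the idea, not the original program, and the descriptors in `main` are hypothetical stand-ins chosen only to exercise the rule:

```haskell
import Data.List (intersect, nub)

-- A block descriptor is a list of connected-region descriptors,
-- e.g. ["b", "bffh", "ee", "j"] for b/bffh/ee/j.
type Descriptor = [String]

-- Distinctive components: regions of two or more letters, plus the
-- exceptional single letter "j".
components :: Descriptor -> [String]
components d = nub [r | r <- d, length r >= 2 || r == "j"]

-- Two blocks go in the same group when they share at least two
-- distinctive components.
similar :: Descriptor -> Descriptor -> Bool
similar a b = length (components a `intersect` components b) >= 2

main :: IO ()
main = do
    -- These two share the ee and j components:
    print (similar ["b", "bffh", "ee", "j"] ["b", "b", "ee", "f", "f", "j"])   -- True
    -- These two share at most one component:
    print (similar ["b", "bffh", "ee", "j"] ["bf", "bf", "e", "e", "i", "i"])  -- False
```

With `similar` in hand, the grouping is just a pairwise scan over the 72 descriptors.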

Fun with (Kalman) Filters Part I

Planet Haskell - Tue, 07/29/2014 - 06:00
Suppose we wish to estimate the mean of a sample drawn from a normal distribution. In the Bayesian approach, we know the prior distribution for the mean (it could be a non-informative prior) and then we update this with our …