Functional Programming

Out of the hospital

Planet Haskell - 16 hours 31 min ago
I woke up feeling terrible last Monday, and by midnight I was on a bed in the ER. Spent the next few days in the hospital: had surgery on Wednesday, got released on Thursday. Since then I've been resting about the house, and have been recovering quickly. It was my first "real" surgery. (I've had other surgical procedures, but they aren't the sort that doctors mean when they ask if you've "had any surgeries".)

Before saying what I had done, I'd like to make a point about social awareness. Do you recognize these symptoms?

- sharp shooting pain in the chest, possibly extending to shoulders and neck/throat; lightheadedness/dizziness; shortness of breath.
- dark urine; yellowing skin/eyes; nausea/vomiting; difficulty concentrating; sleepiness.
- urinating often; increased thirst/hunger; blurry vision; abrasions heal slowly; tingling/pain/numbness in hands/feet.
- dull pain in the pit of your stomach, possibly extending to back or right shoulder, possibly increasing after eating fatty foods; doesn't abate in different positions; fever and chills.

In this day and age I'd expect any moderately educated person to recognize the first three immediately: heart disease, liver disease, and diabetes (type 2). Both heart disease and diabetes have had powerful ad campaigns to increase awareness. Liver disease hasn't had that advantage, but the symptoms of jaundice are mentioned in the side-effect reports of most medications, and they're pretty memorable to boot.

The last one I never would have recognized until it happened to me. And, frankly, the ER doctors had a hell of a time figuring out what it might be based only on my symptoms. I felt feverish at the time, though my temperature was normal. This was the first out-and-out attack, so I couldn't talk about how often it happened nor say whether it got worse after eating fatty foods. Knowing all the symptoms now, I can look back and see that this has been a long time coming; but at the moment all I could tell the docs was: intense dull pain in the pit of my stomach, doesn't get better or worse when changing position.

These are the symptoms of gallbladder disease. Women are more than twice as likely as men to get it. Women on hormone replacement therapy are more likely to get it. Many women are hit with it during the end of pregnancy, so much so that nurses remark on the number of cholecystectomy patients with one-week-old babies. There's something about estrogen (or fluctuations thereof) that doesn't play nicely with the gallbladder. So I mention this for all the women, especially trans women, in the audience. Of course, none of this is to say that there aren't plenty of men who suffer from the same.

Prior to going to the ER I'd heard almost nothing about gallbladder disease, other than knowing that gallstones were a thing. But in the scarce week since then I've lost track of how many women have told me about their cholecystectomies. With how common it is, I think this is one of those diseases that we as a society should be able to recognize, like diabetes, heart attacks, and jaundice.

So yeah, I'm down an organ now. There aren't too many side effects of having your gallbladder removed (compared to removing other organs), though it does mean I'll have to watch my diet. I've been doing that anyways; now it's just different things to look for. I'll have to put the high-protein low-carb diet on hold for a couple months, since I need to reduce fat intake until my body gets used to the new me.
Also worth noting: apparently losing weight quickly (as with the 30 pounds I dropped last fall) can increase the risk of gallstones. So if you're dropping weight, you should be sure to monitor things and try to flush/cleanse your gallbladder. That's it for now. Goodnight and good health.

Lens is unidiomatic Haskell

Planet Haskell - 16 hours 31 min ago
Edward Kmett writes: Ironically if I had to say anything from the traffic in my inbox and on #haskell about it, it is mostly the old guard who gets disgruntled by lens. So let me try and explain why that is.

I'll go ahead and say this: lens is unidiomatic Haskell. By which I mean that lens isn't like any other library that we normally use. It doesn't follow the conventions of Haskell APIs, on which I elaborate below. Now let me clarify that this doesn't necessarily mean that lens is a bad library. It's an unusual library. It's almost a separate language, with its own idioms, embedded in Haskell. It is as unusual as, say, Yesod is. But unlike Yesod, which follows Haskell's spirit, not letter (syntax), lens, I argue, follows Haskell's letter, but not its spirit.

So here's why I think lens violates the spirit of Haskell:

In Haskell, types provide a pretty good explanation of what a function does. Good luck deciphering lens types. Here's a random function signature I picked from lens:

below :: Traversable f => APrism' s a -> Prism' (f s) (f a)

Despite having some (fairly basic) understanding of what prisms are, this signature tells me nothing at all, even despite the usage of type synonyms. So you have to rely on documentation much more than on types. Yeah, just like in Ruby.

Usually, types in Haskell are rigid. This leads to a distinctive style of composing programs: look at the types and see what fits where. This is impossible with lens, which takes overloading to a level mainstream Haskell probably hasn't seen before. We have to learn the new language of the lens combinators and how to compose them, instead of enjoying our knowledge of how to compose Haskell functions. Formally, lens types are Haskell function types, but while with ordinary Haskell functions you immediately see from the types whether they can be composed, with lens functions this is very hard in practice.

The size of the lens API is comparable to the size of what I'd call «core Haskell» (i.e. more or less the base library). It is also similar in spirit to base: it has a big number of trivial combinations of basic functions, in order to create a base vocabulary in which bigger programs are expressed. Ordinary libraries, instead, give only basic functions/combinators, and rely on the vocabulary provided by base (or lens) to compose them together. This is why I regard lens as a language in its own right. And this demonstrates why learning lens is hard: surely learning a new language is harder than learning a new library.

Dependencies. A library as basic in its purpose as lens ideally should have almost no dependencies at all. Instead, other libraries should depend on it and implement its interface. (Or even do it without depending on it, as is possible with lens.) A package implementing lenses that depends on a JSON parsing library and a gzip compression library sounds almost like a joke to me. OTOH, it kind of makes sense if you think about lens as a language. It just ships with a "rich standard library". Nice!

Backward composition of lenses. It's a minor issue, and I wouldn't mention it if it wasn't a great demonstration of how lens goes against the conventions of Haskell.

Note that I'm not trying to make a judgment here (although my tone probably does give away my attitude towards lens). I'm simply explaining why people may dislike and resist it. Nor am I trying to argue against any particular design decision of lens. I'm sure they all have valid rationale behind them.
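(To see the backward-composition point concretely, here is a minimal hand-rolled sketch; the Person/Address lenses are an illustration of mine, not from the post. With lens, the composite address . city reads outer-to-inner, left to right, while the equivalent plain getters compose the other way around, as _city . _address.)

{-# LANGUAGE RankNTypes #-}
import Control.Lens

data Address = Address { _city :: String }
data Person  = Person  { _address :: Address }

-- hand-written lenses (makeLenses would generate these):
address :: Lens' Person Address
address f p = (\a -> p { _address = a }) <$> f (_address p)

city :: Lens' Address String
city f a = (\c -> a { _city = c }) <$> f (_city a)

-- view (address . city) (Person (Address "Paris"))  ==>  "Paris"
-- whereas the plain getter pipeline runs in the reverse order:
--   _city . _address $ Person (Address "Paris")     ==>  "Paris"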
I just hope that someone will write an idiomatic Haskell library as powerful as (or close to) lens, with perhaps a different set of compromises made. Otherwise, I'm afraid we all will have to learn this new language sooner or later.

2014-04-23T21:00:00Z http://ro-che.info/articles/2014-04-24-lens-unidiomatic.html

'T' is for Theorem-proving

Planet Haskell - 16 hours 31 min ago
'T' is for Theorem-proving. So, you've been theorem-proving all your life. Let's just get that one out there. When you're given a math problem that says:

"Solve x + 1 = 2 (for x) (of course)"

and you say, "Why, x is 1, of course," quick as thinking, you've just proved a theorem. Here, let me show you:

x + 1 = 2    (basis)
   -1 = -1   (reflexive)
x + 0 = 1    (addition)
x     = 1    (identity)

Q.E.D. ('cause you rock)

So, yeah. Theorem proving is this, correction: theorem proving is simply this: going from step 1, to step 2, to step 3, until you've got to where you want to be. How do you go from step 1 to step 2, etc.? The same way you do everything! You follow the rules you're given.

Let's prove a theorem, right now. So, in classical logic, we have a theorem that says

p → p

That is, if you've got a p, that implies (that you've got a) p. Toughie! But proving that? How do we go about doing that? Well, in classical logic, you're given three basic axioms (thanks to sigfpe for his article "Adventures in Classic Land"):

1. p → (q → p)
2. (p → (q → r)) → ((p → q) → (p → r))
3. ¬¬p → p

So, p → p. How do we get there? Let's start with axiom 1 above.

1. p → ((p → p) → p)                                  axiom 1 (where q = (p → p))
2. (p → ((p → p) → p)) → ((p → (p → p)) → (p → p))    axiom 2 (where q = (p → p) and r = p)
3. (p → (p → p)) → (p → p)                            modus ponens
4. p → (p → p)                                        axiom 1 (where q = p)
5. p → p                                              modus ponens

Q.E.D. (a.k.a. whew!)

I used something called modus ponens in the above proof. It says, basically, that if you've proved something that you're depending on, you can drop it. Or, logically:

p → q, p
∴ q

Or: we have "p implies q," we've proved p, so now we can just use q.
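(An aside not in geophf's post: under the Curry–Howard correspondence, axiom 1 is the type of the K combinator, axiom 2 is the type of S, and the five-step proof above is exactly the classic S K K construction of the identity function. In Haskell:)

-- axiom 1: p → (q → p)
k :: p -> q -> p
k x _ = x

-- axiom 2: (p → (q → r)) → ((p → q) → (p → r))
s :: (p -> q -> r) -> (p -> q) -> p -> r
s f g x = f x (g x)

-- steps 1 through 5 of the proof, as a program:
identity :: p -> p
identity = s k k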
Now, there's been a lot of thought put into theory, coming up with theorems and proving them, and this has been going on over a long time, from Aristotle up to now. The big question for a long time was that ... well, theorem proving is a lot of hard work. Is there any way to automate it? So there's been this quest to make theorem proving mechanical. But to make theorem-proving mechanical, you have to mechanize everything: the axioms, the rules, and the process. Up until recent history, theory has been a lot of great minds spouting truths, and 'oh, here's a new way to look at the world!' And that has been great (it has), but it hasn't helped mechanize things.

Then along came Frege. What Frege did was to give us a set of symbols that represented logical connectives, and then symbols that represented things. And there you had it: when you have objects and their relations, you have an ontology, a knowledge-base. Frege provided the tools to represent knowledge as discrete things that could be manipulated by following the rules of the (symbolized) relations. He gave abstraction to knowledge and a uniform way of manipulating those abstractions, so, regardless of the underlying knowledge, be it about money or chickens or knowledge itself, it could be represented and then manipulated. That way you could go from step 1 to step 2 to step 3, etc., until you arrived at your conclusion, or, just as importantly, arrived at a conclusion (including one that might possibly say what you were trying to prove was inadmissible). Since Frege, there has been (a lot of) refinement to his system, but we've been using his system since, because it works so well. He was the one who came up with the concept of types ('T' is for Types), and from that we've improved logic to be able to deliver this blog post to you on a laptop that is, at base, a pile of sand that is constantly proving theorems in a descendant of Frege's logic.

Let's take a look at one such mechanized system. It's a logical framework, and is called Twelf. The first example proof from the Princeton team that uses this system is 'implies true,' and it goes like this:

imp_true: pf B -> pf (A imp B) = ...

That's the declaration. It's saying: the proof of B, or pf B, implies the proof of A implies B, or pf (A imp B). How can you claim such a thing? Well, you prove it.

imp_true: pf B -> pf (A imp B) =
   [p1: pf B]   % you have the proof of B
   % ... but now what? for we don't have the proof that A implies B.

So the 'now what' is our theorem-proving adventure. Mechanized. We don't have the proof that A implies B, but we have a logical loophole (called a hole) that says anything at all has a proof:

hole: {C} pf C.

Which says, mechanized, what I just said in words, so with that:

imp_true: pf B -> pf (A imp B) = [p1: pf B] (hole (A imp B)).

And there you go. ... That is, if you're fine with a big, gaping loophole in your proof of such a thing. If you are satisfied, then, well, here, sit on this rusty nail until you get unsatisfied. So there.

Okay, so we have an implication, but we need to prove that. So we introduce the implication into the proof, which is defined as:

imp_i: (pf A -> pf B) -> pf (A imp B)

SO! We have that, and can use it:

imp_true: pf B -> pf (A imp B) = [p1 : pf B] (imp_i ([p2: pf A] (hole B))).

So, we have the proof of A, and that leads to B. But wait! We already have B, don't we? It's p1 (that is, [p1 : pf B]). BAM!

imp_true: pf B -> pf (A imp B) = [p1 : pf B] (imp_i ([p2 : pf A] p1)).

DONE!

This is what theorem-proving is: you start with what you want, e.g.:

imp_true: pf B -> pf (A imp B)

which is "implies-true is that if you have B proved, then you have the proof that (A implies B) is true, too." Then you take what you've got:

pf B

And give it a value:

[p1 : pf B]

Then you apply your rules (in this case, implication introduction, or 'imp_i') to fill the holes in your proof, to get:

[p1: pf B] (imp_i [p2: pf A] p1)

Which is the demonstration of your proof, and that's why we say 'Q.E.D.,' or 'quod erat demonstrandum': 'which was to be demonstrated.'

Ta-dah! Now you have all the tools you need to prove theorems. Now go prove Fermat's Last Theorem. eheh. (I am so mean!) Okay, okay, so maybe Fermat's Last Theorem is a bit much, but if you want to try your hand at proving some theorems, there's a list of some simple theorems to prove.

'S' is for Simply Summing

Planet Haskell - 16 hours 31 min ago
'S' is for ... simply summing.

Okay, this post was going to be about symmetries and super-symmetries, then I was going to transition to sieves, like the Sieve of Eratosthenes, and how that relates to fizz-buzz, ... But then I thought: 'eh.' Yeah, that's what I thought: 'eh.' I'm a moody brute, filled with deep, indiscernible thoughts, 'cause that's how I roll. So, yeah: eh. This one will be a simple post today.

Sums.

Sum the numbers 1. Yeah, I just said that; humor me. Okay, the sum of 1 is 1. Toughie.

Next, sum 1 + 2. 3, right? Still with me?

Okay, 1 + 2 + 3. Got it? 6. No probs.

Now: sum 1 to 10. No laptop help, just sum it yourself. A little more fun, right? So, that's 55.

Okay, no computers: sum 1 to 100. Hm, hm, hm. Done yet? La-di-dah. You're not even trying, are you? You're just reading this post. Okay, the sum from 1 to 100, inclusive, no computers, is ... 5,050.

1 to 1000: 500,500. Pattern? You bet.

1 to ... oh, I don't know: 123. That's a mite harder, but, no computers:

6,200 + 1,240 + 186 = 7,626.

Okay, let's verify, by asking my Haskell interpreter:

Prelude> sum [1 .. 123]
7626

Yeppers, I was right. I was also as fast as a person typing a program into their computer to solve it, if they were using a functional programming language, and much, much faster than a person using a computer if they weren't, and called up Excel, for example (loading ... loading ... loading ... blue screen of death) (You really need to virus-clean your computer, because I see you don't have a Mac, do you, tsk, tsk!), or if they pulled out their HP calculator from their high school days, because they are that old, boys and girls!

You see this thing in my hand, children? This is not a smart phone, it's a calculator.

Children: Ooh! Papa, can I borrow it for a sec ...

Children, borrowing it, then puzzling over it: Um, Dad, ... where's the Plants vs. Zombies 2 app?

Yeah, a calculator, it calculates. And that's it. No google, no nothing, just numbers and numeric operations.

Children, looking uncomprehendingly at the thing in their hands, then tossing the cootied thing from their hands and running to their rooms, screaming, remembering with horror the time Dear Old Dad got out an LP and told them that music came out of it, but ... it didn't have playlists!

Yeah. Calculator. But would your calculator get you that sum that fast? I mean, after you go to the CVS to buy a new battery. Those things are long-lasting, but thirty years later? And you expect your calculator to still just work? Pfft! Heh.

Pick a number (natural number), any number, I'll sum it for you, from the origin. Sum 1 to 31415. Okay:

31415 * 15708 = 314,150,000 + 157,075,000 + 21,990,500 + 0 + 251,320 = 493,466,820

Okay, that took a bit longer; let's see what Haskell says:

Prelude> sum [1..31415]
493466820

Okay. Damn! ... Sometimes I impress even myself. How do I do this? How do I add all the numbers from 1 to 31,415, and in under a minute? And perfectly, perfectly, correctly? Well, I'll tell you. But first, I'll tell you a story.

Once upon a time, there was a little boy named Gauss, and he was a naughty, little, precocious lad, always getting into trouble for dunking Susie Derkins' pigtails in the inkwell at his desk, and that would be fine if she had jet black hair, but you know the Viking types, right?
All blond-haired, blue-eyed things, and getting your honey-blond long-flowing locks dunked into ink was not on the agenda for poor, little Susie, who cried, and had to be sent home to console herself with an appointment at the beautician's, who thought the Goth look actually fit Susie well, so she came back the next day as Suze-in-black-leather-and-studs, which was disruptive for the class in other ways, so Gauss found himself at detention again.

But this time the math teacher had had enough of little Gauss' pronouncements and recitations of π to sixty-seven places, and decided to teach the little scamp but good. Not that I'm channeling, or anything. It's not 'Mary Sue'-writing; it's 'walking in your character's moccasins a mile' ... even though Gauss probably didn't wear moccasins all that often, anyway. Still with me?

So, mean old Herr Schopenhauer told little Gauss: "Okay, twerp, this is gonna be a light one on you. You can go home after you sum the numbers from one to one hundred."

Huh? One to one hundred? But that will take all night!

"That's right," said mean old Herr Bergermeister Meisterburger, "so you'd better get crackin'!" And he cackled evilly with Schadenfreude. The Head-Master had several names, don't you know, ... and a predilection for Schadenfreude with his afternoon tea and muffin.

So, what could little Gauss do? He got cracking, the poor lamb.

1 = 1
1 + 2 = 3
1 + 2 + 3 = 6
1 + 2 + 3 + 4 = 10 ... obviously, you just add the last number to the previous sum
1 + 2 + 3 + 4 + 5 = 15 ... now, wait a minute ...
1 + 2 + 3 + 4 + 5 + 6 = 21 ... there seems to be another pattern here ...
1 + 2 + 3 + 4 + 5 + 6 + 7 = 28 ... Gauss looked at the chalkboard, then ...

Got it! little Gauss thought. (He'd spotted the trick: pair 1 with 100, 2 with 99, and so on; fifty pairs, each summing to 101.) Then he wrote:

1 + 2 + 3 + ... + 98 + 99 + 100 = 5,050.

"G'nite, Teach!" Gauss crowed, and off he skipped home with Suze, where they shared a chocolate milk and a strawberry poptart ... before all that sugar glaze had been added to the crust (shudder).

And Herr Herren HerringSchönheit was left stuttering a "Vat? Vat? Git back here, you rapscallion!" Okay, what's a rap-scallion? Is it a thug onion with 'tude, a glock, and a triple-platinum pedigree? But all Herr whatz-his-name could do was sputter, because little Gauss got it. The sum from one (zero, actually) to any number reduces to a simple formula:

sum [1..n] = n * (n + 1) / 2

Either n or n + 1 is even, so one of them is divisible by two, so it works. Try it out:

1 = 1 * (1 + 1) / 2 = 1 * 2 / 2 = 1
1 + 2 = 2 * (2 + 1) / 2 = 2 * 3 / 2 = 3
1 + 2 + 3 = 3 * (3 + 1) / 2 = 3 * 4 / 2 = 3 * 2 = 6

... so obviously:

1 + 2 + 3 + ... + 31,415 = 31,415 * (31,415 + 1) / 2 = 31,415 * 15,708 = that big number I computed earlier. Yeah, almost five hundred million!

Put that into your "ceci-n'est-pas-une-pipe" and smoke it. I'd buy that for a dollar. Now, there's something you can impress your friends at Happy Hour with.

Epilogue

Okay, so you can sum 1 to ... whatevs, but what if a friend asks you, very kindly, and very politely, "Okay, Ms. Smartypants, how about summing 1,000 to 2,000? What's that, huh? Betcha can't do that! Huh? Amirite? Amirite? Lookacha sweating bullets now, arencha? So, well, what is it? Huh?"

Yeah, really politely. Like that.

Now you could say, "But-but-but, Gauss' summer[-function, not -time] doesn't work like that." But then you'd have feet of clay and egg on your face. And with this hot summer coming, you don't want egg on your face. Trust me on that one.

So.
What to do? Think about it, that's what.

You simply make your 1,000 your zero, by scaling each number back by 1,000, then get your sum, then scale each number forward again by 1,000. For each time you applied the scale, you apply the scale to the result.

So, the sum of 1,000 to 2,000 inclusive is:

1,000 + 1,001 + 1,002 + ... + 1,998 + 1,999 + 2,000 =

scaled back is

0 + 1 + 2 + ... + 998 + 999 + 1000 = 1000 * (1000 + 1) / 2 = 500,500

Now scale each number forward again, by adding 1,000 back to each number. You did that 1,001 times, right? (Don't forget to count the zero.) That's a rescalation (that's a word now) of 1,001,000. Add that to your sum to get 1,501,500. There's your answer.

Let's check:

Prelude> sum [1000..2000]
1501500

Bingo! Tickets! This must be the Front Row! And you, very politely, just told your friend where they could put their superior 'tude, too. Win-win! Good times; good times!

See y'all tomorrow. By the way, what's the number of moves needed to solve the Towers of Hanoi? Not that 'Towers of Hanoi' is going to be my topic for 'T.' Not at all.
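(If you'd like Gauss' formula and the shift trick as runnable Haskell, here is a small sketch of my own; the names sumTo and sumRange are mine, not from the post:)

-- Gauss: 0 + 1 + ... + n
sumTo :: Integer -> Integer
sumTo n = n * (n + 1) `div` 2

-- Shift the range down to start at zero, sum it,
-- then add lo back once per term: (hi - lo + 1) terms in all.
sumRange :: Integer -> Integer -> Integer
sumRange lo hi = sumTo (hi - lo) + (hi - lo + 1) * lo

-- Prelude> sumRange 1000 2000
-- 1501500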

How to Fake Dynamic Binding in Nix

Planet Haskell - 16 hours 31 min ago
The Nix language is a purely functional, lazy, statically scoped configuration language that is commonly used to specify software package configurations, OS system configurations, distributed system configurations, and cloud system configurations. Static scoping means that variables are statically bound; all variable references are resolved based on their scope at declaration time. For example, if we declare a set with recursive bindings,

❄> let a = rec { x = "abc"; x2 = x + "123"; }; in a
{ x = "abc"; x2 = "abc123"; }

then the use of x in the definition of x2 is resolved to "abc". Even if we later update the definition of x, the definition of x2 will not change.

❄> let a = rec { x = "abc"; x2 = x + "123"; }; in a // { x = "def"; }
{ x = "def"; x2 = "abc123"; }

Generally speaking, static binding is a good thing. I find that languages with static scoping are easy to understand because variable references can be followed in the source code. Static scoping lets Nix be referentially transparent, so, modulo α-renaming, you can always substitute expressions by their (partially) normalized values. In the previous example, we can replace the expression rec { x = "abc"; x2 = x + "123"; } with { x = "abc"; x2 = "abc123"; } and the result of the program does not change.

❄> let a = { x = "abc"; x2 = "abc123"; }; in a // { x = "def"; }
{ x = "def"; x2 = "abc123"; }

That said, on some occasions it would be nice to have some dynamic binding. Dynamic binding is used in Nixpkgs to allow users to override libraries with alternate versions in such a way that all other software packages that depend on it pick up the replacement version. For example, we might have the following in our nixpkgs declaration

…
boost149 = callPackage ../development/libraries/boost/1.49.nix { };
boost155 = callPackage ../development/libraries/boost/1.55.nix { };
boost = boost155;
…

and perhaps we want to make boost149 the default boost version to work around some regression. If we write nixpkgs // { boost = boost149; } then we only update the boost field of the nix package collection, and none of the packages depending on boost will change. Instead we need to use config.packageOverrides to change boost in such a way that all expressions depending on boost are also updated. Our goal is to understand the technique that packageOverrides and other similar overrides employ to achieve this sort of dynamic binding in a statically scoped language such as Nix. This same technique is also used to give semantics to object-oriented languages.

First we need to review normal recursive bindings. The rec operator can almost be defined as a function in Nix itself by taking a fixed point. Recall that in Nix lambda expressions are written as x: expr.

❄> let fix = f: let fixpoint = f fixpoint; in fixpoint;
   in let a = fix (self: { x = "abc"; x2 = self.x + "123"; }); in a
{ x = "abc"; x2 = "abc123"; }

The function self: { x = "abc"; x2 = self.x + "123"; } is an object written in the open recursive style. By taking the fixpoint of this function, the recursion is closed, yielding the desired value. Written this way, we had to prefix the recursive references to x with self.. However, using Nix's with operator, we can bring the values of self into scope, allowing us to drop the prefix.

❄> let fix = f: let fixpoint = f fixpoint; in fixpoint;
   in let a = fix (self: with self; { x = "abc"; x2 = x + "123"; }); in a
{ x = "abc"; x2 = "abc123"; }

This is pretty close to a definition of rec.
We can almost think of rec { bindings } as syntactic sugar for fix (self: with self; { bindings }). In order to override the definition of x, instead of updating it, we need to intercept the definition of x before the open recursion is closed. To this end, we write a fixWithOverride function that takes a set of overriding bindings and an open recursive object, and applies the override bindings before closing the recursion.

❄> let fix = f: let fixpoint = f fixpoint; in fixpoint;
       withOverride = overrides: f: self: f self // overrides;
       fixWithOverride = overrides: f: fix (withOverride overrides f);
   in let open_a = self: with self; { x = "abc"; x2 = x + "123"; };
   in fixWithOverride { x = "def"; } open_a
{ x = "def"; x2 = "def123"; }

Success! We have managed to override the definition of x and get the definition of x2 updated automatically to reflect the new value of x. Let us step through this code to see how it works. First we defined open_a to be the same as our previous definition of a, but written as an open recursive object. The expression fixWithOverride { x = "def"; } open_a reduces to fix (withOverride { x = "def"; } open_a). What the withOverride function does is take an open recursive object and return a new open recursive object with updated bindings. In particular, withOverride { x = "def"; } open_a reduces to

self: (with self; { x = "abc"; x2 = x + "123"; }) // { x = "def"; }

which in turn reduces to self: { x = "def"; x2 = self.x + "123"; }. Finally, fix takes the fixpoint of this updated open recursive object to get the closed value { x = "def"; x2 = "def123"; }.

This is great; however, we do not want to have to work with open recursive objects everywhere. Instead, what we can do is build a closed recursive value, but tack on an extra field named _override that provides access to withOverride applied to the open recursive object. This will allow us to perform both updates and overrides at our discretion.

❄> let fix = f: let fixpoint = f fixpoint; in fixpoint;
       withOverride = overrides: f: self: f self // overrides;
       virtual = f: fix f // { _override = overrides: virtual (withOverride overrides f); };
   in let a = virtual (self: with self; { x = "abc"; x2 = x + "123"; });
   in rec { example1 = a;
            example2 = a // { x = "def"; };
            example3 = a._override { x = "def"; };
            example4 = example3._override { y = true; };
            example5 = example4._override { x = "ghi"; };
          }
{ example1 = { _override = «lambda»; x = "abc"; x2 = "abc123"; };
  example2 = { _override = «lambda»; x = "def"; x2 = "abc123"; };
  example3 = { _override = «lambda»; x = "def"; x2 = "def123"; };
  example4 = { _override = «lambda»; x = "def"; x2 = "def123"; y = true; };
  example5 = { _override = «lambda»; x = "ghi"; x2 = "ghi123"; y = true; }; }

One remaining problem with the above definition of virtual is that we cannot override the method x2 and still get access to x.

❄> let fix = f: let fixpoint = f fixpoint; in fixpoint;
       withOverride = overrides: f: self: f self // overrides;
       virtual = f: fix f // { _override = overrides: virtual (withOverride overrides f); };
   in let a = virtual (self: with self; { x = "abc"; x2 = x + "123"; });
   in a._override { x2 = x + "456"; }
error: undefined variable `x' at `(string):5:23'

Remembering that Nix is statically scoped, we see that the variable x in a._override { x2 = x + "456"; } is a dangling reference and does not refer to anything in lexical scope. To remedy this, we allow the overrides parameter of withOverride to optionally take an open recursive object rather than necessarily a set of bindings.
❄> let fix = f: let fixpoint = f fixpoint; in fixpoint;
       withOverride = overrides: f: self:
         f self // (if builtins.isFunction overrides then overrides self else overrides);
       virtual = f: fix f // { _override = overrides: virtual (withOverride overrides f); };
   in let a = virtual (self: with self; { x = "abc"; x2 = x + "123"; });
   in rec { example6 = a._override (self: with self; { x2 = x + "456"; });
            example7 = example6._override { x = "def"; };
          }
{ example6 = { _override = «lambda»; x = "abc"; x2 = "abc456"; };
  example7 = { _override = «lambda»; x = "def"; x2 = "def456"; }; }

This illustrates the basic technique that packageOverrides and other similar overrides use. The packageOverrides code is a bit more complex because there are multiple points of entry into the package override system, but the above is the essential idea behind it. The makeOverridable function from customisation.nix creates an override field similar to my _override field above, but overrides function arguments rather than overriding recursive bindings.

The syntax virtual (self: with self; { bindings }) is a little heavy. One could imagine adding a virtual keyword to Nix, similar to rec, so that virtual { bindings } would denote this expression.

After writing all this I am not certain my title is correct. I called this faking dynamic binding, but I think one could argue that this is actually real dynamic binding.
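(For readers more at home in Haskell, the core fix/withOverride trick transliterates directly. A minimal sketch of my own, not from the post: a two-field record stands in for the Nix set, and the override is passed as a record-update function since Haskell has no // operator; laziness ties the knot exactly as Nix's fixpoint does.)

data Obj = Obj { x :: String, x2 :: String } deriving Show

-- close the recursion: the Haskell analogue of Nix's fix
fix' :: (a -> a) -> a
fix' f = let self = f self in self

-- the open recursive object: x2 refers to the *final* self's x
openA :: Obj -> Obj
openA self = Obj { x = "abc", x2 = x self ++ "123" }

-- apply the override before closing the recursion, like withOverride
withOverride :: (Obj -> Obj) -> (Obj -> Obj) -> (Obj -> Obj)
withOverride override f = \self -> override (f self)

-- fix' openA
--   ==> Obj {x = "abc", x2 = "abc123"}
-- fix' (withOverride (\o -> o { x = "def" }) openA)
--   ==> Obj {x = "def", x2 = "def123"}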

'F' is for function

Planet Haskell - 16 hours 31 min ago
'F' is for function.

Okay, you just knew we were going there today, didn't you?

Today, and ... well, every day in the last 2,600 years (I'm not joking), has had this dynamic tension between objects and functions. Which is greater? Read this tell-all post to find out!

Sorry, I channeled Carl Sagan in Cosmos for a second. Not really. But I digress.

WAY back when, back in the olden times, there were just things. And then, when you had a full belly and your one thing, your neighbor, who happened to be named 'Jones,' had two things. And then keeping up with the Jones' came to be a real problem, because you had one thing, and they had two. So you got one more. And counting was invented. For things.

In fact, in some cultures, counting doesn't exist as an abstract thing. You don't count, in general; you have different counting systems for different things. Counting, for example, pens is a very different thing than counting eggs. Ha-ha, so quaint! Who would ever do that? Well, you do. What time is it? You have a different way of counting time than you do for counting distance, or for counting the groceries you need to buy, don't you? But, at base, they are all just things: time, distance, and gallons of milk. Sorry: litres of milk. How very British Imperial of me!

Question: why are Americans just about the only ones using the British Imperial system any more? That is, the real question is: why are the British not using the British Imperial system? Are they onto something we're not? Anybody want a satellite coded wrongly because some engineers intermingled British Imperial units with metrics?

Okay, so: units, things, metrics, whatevs. And counting. Counting is doing something to something. Whereas, at first, it was important that you have that chicken in your stomach (cooked is nice, but optional), suddenly it's become important that you have not just one chicken, a hen; an occasional rooster with all your hens would be (very rarely) nice, too. (Roosters are big jerks, by the way.)

Counting was all we had, after the chickens and their yummy eggs, that is, and that for a long while.

Question: which came first, the chicken or the egg? Answer: that's obvious, the egg. Think about it. You have eggs for breakfast, then you have a chicken-salad sandwich. Like I said: obvious! First the egg, then the chicken. Unless you're one of those weirdos that eat chicken sausage with your breakfast; then it's a toss-up, but in that case I have no help for you.

Okay: chickens, eggs (egg came first) (you read that first, right here on this blog) (and you thought there was no point to math!), and then counting, for a long, long time.

Then, ... counting became too much, because you ran out of fingers, then toes, to count your mutton on. It happens. That's a good thing, because then people started to have to think (that's a good thing, too), and they thought, 'how do I count all my mutton when I run out of fingers and toes? I need to know that I have more sheep than the Jones', and I've paired each of my sheep to his, and I have more than twenty-two more than he does. How many more do I have!'

Then people started getting smart. They did more than one-to-one, or pairwise, groupings; they started grouping numbers, themselves.

All this is very nice background, geophf, but to what end? First counting, then grouping: those are operations, on or using numbers. Then addition and subtraction, which are abstractions, or generalizations, of counting; then multiplication, and let's not even go into division.

But. Counting, adding, multiplying.
These things worked for your chickens (and eggs) (first), but also on your mutton, and, when you settled down, it worked on how many feet, yards, the hectares of land you had, too. The same principles applied: numbers worked on things, no matter what the things were, and the operations on numbers worked on ... well, no matter what the numbers counted.

Now. Today. Or, sometime back in the 40s anyway, way back before the Birkenstock-wearers were mellowing out with lattes, anyway. A couple of dudes (Samuel Eilenberg and Saunders Mac Lane, specifically) said, 'Hey, wait!' they exclaimed, 'it doesn't matter what the thing is, at all!' they said, 'as long as the things are all the same thing, we don't care at all about the thing itself!' And they invented a little something that eventually became known as Category Theory, which is the basis of ... well, a lot of things now: quantum thermodynamics, for one, and computer science, for another. And they nailed it. They created a one-to-one correspondence between the things in categories and the things in Set theory.

Why is that important? Set theory is this:

{}

Got me? A set is a thing that contains a thing. The fundamental set is the set that contains nothing (the 'empty set'), which is this:

{}

And other sets are sets that contain sets:

{{}, {{}, {{}, {}}}}

With sets you can model numbers if you'd like:

0: {}
1: {{}} (the set that contains '0')
2: {{}, {{}}} (the set that contains '0' and '1')
3: {{}, {{}}, {{}, {{}}}} (the set that contains '0', '1', and '2')
etc...

And as Gödel (eventually) showed, once you model numbers, you can model anything. (We'll get to that ... for 'G' is for Gödel-numbering, I'm thinking.)

How? Well, once you have numbers, you can count with them (as just demonstrated), you can add one to them (the 'successor' function), you can take away one from them (the 'predecessor' function), you can add two of them together (successively add one to the first number until the second number is zero. Try it!), subtract, by doing the opposite (or by just removing duplicates from both until one of them is gone ... 'pairwise subtraction'!), multiply two together (successive addition), divide numbers ... (um, yeah) ... and, well, do anything with them. Once you have Number, you have counting, and from that, you have everything else.

So, our dudes mapped category theory things (morphisms) down to Set mapping functions, and they had it. Because why? Well, it's rather cumbersome to carry a whole bunch of sets for your Pert formulation, especially when you want to see the money you (don't) save by paying off your mortgage's principal early.

Banker: oh, geophf, pay off your mortgage principal early, you'll save a ton in the long run.
me: uh, that was true when mortgage interest was 10-15-21% ... (you remember those Jimmy Carter days?)
me, continuing: ... but now that my interest rate is 2.5%, I save zip after 30 years for early payment.

Try it. Run the numbers. Bankers haven't. They're just repeating what was good advice ... twenty-five years ago.

But when you have objects, whose representations do not matter, be they 0, 1, a googol, or {{}, {{}}, {{}, {{}}}}, then you can concentrate on what's really important. The functions.

Whoopsie! I slipped and gave the answer, didn't I? Me, and Alan Kay, both. Alan Kay, inventor of Smalltalk, and inspirer of the movie Tron: The objects aren't important, it's the messages that go between them that is. So there.

Well, okay, functions win. It used to be the chicken (or the egg), but now we've ...
grown a bit, worrying less about what fills our stomach, and where we're going to get our next meal (and not be the next meal), you know: survival. We've transcended our stomachs. Maybe.

Well, okay, functions rule, so back off.

The dudes didn't stop there, because there's been some really neat things going on with functions since discovering functions are things in and of themselves. Because if functions are things, in and of themselves, then, well, ... well, you can number them (tomorrow) (promise), you can count them, you can add them, you can multiply them, you can exponentiate them, you can ... You can, in fact, operate on them, applying functions to functions.

So, our dudes did just that. They said their functions, their morphisms, have two rules to be a function. You can ask what it is, its identity:

id f = f

And you can string them together, composing them:

g . f = ... well, g . f

So, if you have

> f x = succ x

and

> g x = x * 2

then (g . f) gives you some function, we can call it h here if you'd like, and h is the composition of applying f to your number first, then g, or:

> h x = 2 * (x + 1)

Try it! Pull up your functional programming language listener (I use Haskell for now) (until I find or invent something better) and enter the above.

Now, it's an interesting proposition to show that

(g . f) == h

But, for now, in Haskell, we cannot do that.
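(We can, however, test it pointwise; a quick sketch using the QuickCheck library, my addition, not geophf's:)

import Test.QuickCheck

f, g, h :: Int -> Int
f x = succ x
g x = x * 2
h x = 2 * (x + 1)

-- checks (g . f) x == h x on a hundred random Ints:
prop_comp :: Int -> Bool
prop_comp x = (g . f) x == h x

-- ghci> quickCheck prop_comp
-- +++ OK, passed 100 tests.

A hundred passing tests is evidence, mind you, not a proof; and that's where theorem provers come in.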
Now, in some other programming languages (I'm thinking of my experience with Twelf), you can, but first you have to provide numbers, on your own, like: 0, 1, 2, ... ... and prove they are what they are, you know: they follow in order, and stuff like that ... Then you provide the functions for identity and composition, and then you can prove the above statement, because now you have theorems and a theorem prover.

Yay!

No, ... no 'yay'! I mean, 'yay,' if that's your thing, proving theorems (I happen to have fun doing that at times), but 'ick,' otherwise, because it's a lot of work to get to things like just plain old addition. See, theorem-prover-folk are these hard-core fighters, right on the edge of discovering new mathematics and science. Not plain-old-little me, I just use the stuff (to build new theories from preexisting, proved ones), so I'm a lover of the stuff. To quote an eminent mathematical philosopher: I'm a lover, not a fighter. And, no, Billie-Jean is not my lover. She's just a girl that claims that I am the one. Anyway!

So, but anyway, using the above two functions, identity and composition, you're now able to reason about functions generally, making new functions by stringing together (composing) other, preexisting functions. This is so important that Eilenberg and Mac Lane gave this process its own name: 'natural transformation.'

But okay, Categories have units, and you don't particularly care what those units are; we saw one set of units, numbers, themselves, as an example. But units can be anything. Even functions, so you have a category of functions (several categories, in fact, as there are different kinds of functions, just so you know) ... You can even have categories of categories of things (things being some arbitrary unitary types, like numbers, or functions, or categories, again ... it can get as wild and woolly as you want it to get).

And types. Categories can be used to model types, so you can have a category that represents the natural numbers as types:

Nat : Category
zero : 1 -> Nat
succ : Nat -> Nat

so

zero  = zero ()
one   = succ (zero ())
two   = succ one
three = succ two
etc ...

And those formulae, those functions, work regardless of the underlying type of things like 'number' or even 'function.'

Category theory was originally called 'Abstract nonsense,' by the way, because Eilenberg and Mac Lane saw it as so abstract that they were actually talking about nothing. Sort of like how Set Theory does, starting from nothing, the empty set, and building everything from that object, but even more abstractly than that, because category theory doesn't even have the 'nothing'-object to start from; it just starts identifying and composing functions against objects about which it doesn't care what they are.

Now, there was, at one point, a working definition of abstraction, and that was: the more useful a theorem is, the less abstract it is. The contrapositive implied: the more abstract a formula is, the less useful. But I find I play in these domains very happily, and in playing, I'm able to invent stuff, useful stuff, that's able to do things, in software, that other software engineers struggle with and even give up on, stating: "This can't be done."

I love it when I'm told I "can't" do something. You know what I hear when I hear the word "can't"? I hear opportunity. Everyone else may have given up on this particular task, because it 'can't' be done. But me? I go right in with my little abstract nonsense, reinventing the language that was unable to frame the problem before with a language that can, and I find, hey, I not only can do it, but I just did.

Language frames our thoughts, telling us what we can and cannot do. English is a very complex language, taking in a lot of the world and describing it just so. The language of mathematics is very, very simple, and describes a very tiny world, a world so tiny, in fact, that it doesn't exist in reality. It's a quantum-sized world, mathematics, and it has the possibility to be even smaller than that. But so what? I get to invent a world that doesn't exist, each time I apply a theorem from mathematics that others are unfamiliar with. And when I do so, I get to create a world where the impossible is possible.

It's very liberating, being able to do things others 'can't.' And when I pull it off, not only do I win, but my customers win, and maybe even those developers who 'can't' win, too, if they're willing to open their eyes to see possibilities they didn't see before.

'F' is for functions, a totally different way of seeing the world: not as a world of objects, but as a world of possibilities to explore.
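(Circling back to that Nat sketch above: rendered as runnable Haskell, a Peano encoding of it, my transliteration and not geophf's, is just:)

data Nat = Zero | Succ Nat deriving Show

one, two, three :: Nat
one   = Succ Zero
two   = Succ one
three = Succ two

-- addition as successive application of Succ, as described earlier:
add :: Nat -> Nat -> Nat
add Zero     n = n
add (Succ m) n = Succ (add m n)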

Typeable and Data in Haskell

Planet Haskell - 16 hours 31 min ago
Data.Typeable and Data.Data are rather mysterious. Starting out as a Haskell newbie you see them once in a while and wonder what use they are. Their Haddock pages are pretty opaque and scary in places. Here's a quick rundown I thought I'd write to get people up to speed nice and quick so that they can start using it. [1] It's really rather beautiful as a way to do generic programming in Haskell. The general approach is that you don't know what data types are being given to you, but you want to work upon them almost as if you did. The technique is simple when broken down.

Requirements

First, there is a class exported by each module: the class Typeable and the class Data. Your data types have to be instances of these if you want to use the generic programming methods on them. Happily, we don't have to write these instances ourselves (and in GHC 7.8 it is actually not possible to do so): GHC provides the extension DeriveDataTypeable, which you can enable by adding {-# LANGUAGE DeriveDataTypeable #-} to the top of your file, or providing -XDeriveDataTypeable to ghc. Now you can derive instances of both:

data X = X deriving (Data,Typeable)

Now we can start doing generic operations upon X.

The Typeable class

As a simple starter, we can trivially print the type of any instance of Typeable. What are some existing instances of Typeable? Let's ask GHCi:

λ> :i Typeable
class Typeable a where
  typeOf :: a -> TypeRep
instance [overlap ok] (Typeable1 s, Typeable a) => Typeable (s a)
instance [overlap ok] Typeable TypeRep
instance [overlap ok] Typeable TyCon
instance [overlap ok] Typeable Ordering
instance [overlap ok] Typeable Integer
instance [overlap ok] Typeable Int
instance [overlap ok] Typeable Float
instance [overlap ok] Typeable Double
instance [overlap ok] Typeable Char
instance [overlap ok] Typeable Bool
instance [overlap ok] Typeable ()

That's the basic Prelude types and the Typeable library's own types. There's only one method in the Typeable class:

typeOf :: a -> TypeRep

The TypeRep value has some useful normal instances:

λ> :i TypeRep
instance [overlap ok] Eq TypeRep
instance [overlap ok] Ord TypeRep
instance [overlap ok] Show TypeRep
instance [overlap ok] Typeable TypeRep

Use-case 1: Print the type of something

So we can use this function on a Char value, for example, and GHCi can print it:

λ> :t typeOf 'a'
typeOf 'a' :: TypeRep
λ> typeOf 'a'
Char

This is mostly useful for debugging, but can also be useful when writing generic encoders or any tool which needs an identifier to be associated with some generic value.

Use-case 2: Compare the types of two things

We can also compare two type representations:

λ> typeOf 'a' == typeOf 'b'
True
λ> typeOf 'a' == typeOf ()
False

Any code which needs to allow any old type to be passed into it, but which has some interest in sometimes enforcing or triggering on a specific type, can use this to compare them.

Use-case 3: Reifying from generic to concrete

A common thing to need to do, when given a generic value, is to sometimes, if the type is right, actually work with the value as the concrete type, not a polymorphic type. For example, a printing function:

char :: Typeable a => a -> String

The specification for this function is: if given a Char, return its string representation; otherwise, return "unknown". To do this, we need a function that will convert from a polymorphic value to a concrete one:

cast :: (Typeable a, Typeable b) => a -> Maybe b

This function from Data.Typeable will do just that.
Now we can implement char:

λ> let char x = case cast x of
                  Just (x :: Char) -> show x
                  Nothing -> "unknown"
λ> char 'a'
"'a'"
λ> char 5
"unknown"
λ> char ()
"unknown"

The Data class

That's more or less where the interesting practical applications of the Typeable class end. But once you have that, the Data class can take advantage of it. The Data class is much more interesting. The point is to be able to look into a data type's constructors, its fields, and traverse across or fold over them. Let's take a look at the class. Again, there are some basic instances provided:

instance Data a => Data [a]
instance Data Ordering
instance Data a => Data (Maybe a)
instance Data Integer
instance Data Int
instance Data Float
instance (Data a, Data b) => Data (Either a b)
instance Data Double
instance Data Char
instance Data Bool

It's a rather big class, so I'll just cover some methods that demonstrate the key use-cases.

Use-case 1: Get the data type

Similar to the TypeRep, you can use dataTypeOf to get a unique representation of a data type:

dataTypeOf :: Data a => a -> DataType

For example:

λ> dataTypeOf (Just 'a')
DataType {tycon = "Prelude.Maybe", datarep = AlgRep [Nothing,Just]}

There aren't any other interesting instances for this type, but we'll look at uses for this type later. Representations (so-called FooRep) tend to be references from which you can reify into more concrete values.

Use-case 2: Inspecting a data type

The most common thing to want to do is to get a list of constructors that a type contains. So, the Maybe type contains two.

λ> :t dataTypeConstrs
dataTypeConstrs :: DataType -> [Constr]
λ> dataTypeConstrs (dataTypeOf (Nothing :: Maybe ()))
[Nothing,Just]

We'll look at what we can do with constructors later. It's also surprisingly common to want to see what the constructor is at a particular index. We could write this function ourself, but there's already one provided:

λ> indexConstr (dataTypeOf (Nothing :: Maybe ())) 2
Just

Sometimes you want to know whether a data type is algebraic (in other words, does it have constructors and is it not one of the built-in types like Int/Float/etc)?

λ> isAlgType (dataTypeOf (Just 'a'))
True
λ> isAlgType (dataTypeOf 'a')
False

Use-case 3: Get the constructor of a value

We have the method

toConstr :: a -> Constr

which, given any instance of Data, will yield a constructor.

λ> :i Constr
data Constr
instance Eq Constr
instance Show Constr

You can't do much with a constructor as-is, but compare and print it:

λ> toConstr (Just 'a')
Just
λ> toConstr (Just 'a') == toConstr (Nothing :: Maybe Char)
False

However, those operations by themselves can be useful. By the way, we can also get back the DataType of a constructor:

λ> constrType (toConstr (Just 'a'))
DataType {tycon = "Prelude.Maybe", datarep = AlgRep [Nothing,Just]}

Use-case 4: Get fields of a constructor

Another typical thing to want to do is to use the field names of a constructor. So for example:

λ> data X = X { foo :: Int, bar :: Char } deriving (Typeable,Data)
λ> toConstr (X 0 'a')
X
λ> constrFields (toConstr (X 0 'a'))
["foo","bar"]

It's a good use-case for serializing and debugging.

Use-case 5: Make a real value from its constructor

It's actually possible to produce a value from its constructor. We have this function

fromConstr :: Data a => Constr -> a

Example:

λ> fromConstr (toConstr (Nothing :: Maybe ())) :: Maybe ()
Nothing

But what do you do when the constructor has fields? No sweat. We have this function:

fromConstrB :: forall a. Data a => (forall d. Data d => d) -> Constr -> a
Haskell beginners: Don't fear the rank-N type. What it's saying is merely that the fromConstrB function determines what the type of d will be by itself, by looking at Constr. It's not provided externally by the caller, as it would be if the forall d. were at the same level as the a. Think of it like scope. let a = d in let d = … doesn't make sense: the d is in a lower scope. That means we can't just write:

fromConstrB (5 :: Int) (toConstr (Just 1 :: Maybe Int)) :: Maybe Int

The Int cannot unify with the d because the quantification is one level lower. It basically doesn't exist outside of the (forall d. Data d => d) (nor can it escape). That's okay, though. There is a type-class constraint which lets us be generic. We already have a function producing a value of that type:

λ> :t fromConstr (toConstr (5 :: Int))
fromConstr (toConstr (5 :: Int)) :: Data a => a

So we can just use that:

λ> fromConstrB (fromConstr (toConstr (5 :: Int))) (toConstr (Just 1 :: Maybe Int)) :: Maybe Int
Just 5

Tada! But wait… What if there're more fields? How do we provide more than one, and of different types? Enter fromConstrM:

fromConstrM :: forall m a. (Monad m, Data a) => (forall d. Data d => m d) -> Constr -> m a

Because it's monadic we can use a state monad to keep an index! Observe:

λ> :t execState
execState :: State s a -> s -> s
λ> :t execState (modify (+1))
execState (modify (+1)) :: Num s => s -> s
λ> :t execState (forM_ [1..5] (const (modify (+1))))
execState (forM_ [1..5] (const (modify (+1)))) :: Num s => s -> s
λ> execState (forM_ [1..5] (const (modify (+1)))) 5
10

Let's put this to use with fromConstrM (this assumes a type like data Foo = Foo Int Char deriving (Typeable,Data) has been defined):

λ> evalState
     (fromConstrM
        (do i <- get
            modify (+1)
            return (case i of
                      0 -> fromConstr (toConstr (5 :: Int))
                      1 -> fromConstr (toConstr 'b')))
        (toConstr (Foo 4 'a')))
     0 :: Foo
Foo 5 'b'

In other words, keep an index starting at 0. Increase it each iteration that fromConstrM does. When we're at index 0, return an Int; when we're at index 1, return a Char. Easy! Right?

Use-case 6: mapping over data structures generically

A common thing to want is to map over a value in a structure-preserving way, but changing its values. For that we have gmapT:

gmapT :: forall a. Data a => (forall b. Data b => b -> b) -> a -> a

Similar to fromConstr*, there is a rank-n type b that refers to each type in the constructor of type a. It's easy enough to use:

λ> gmapT (\d -> case cast d of
                  Nothing -> d
                  Just x -> fromJust (cast (if isUpper x then '!' else x)))
         (Foo 4 'a')
Foo 4 'a'
λ> gmapT (\d -> case cast d of
                  Nothing -> d
                  Just x -> fromJust (cast (if isUpper x then '!' else x)))
         (Foo 4 'A')
Foo 4 '!'

Here I'm doing a little check on any field in the constructor of type Char, and if it's upper case, replacing it with !, otherwise leaving it as-is. The first trick is to use the cast function we used earlier to reify the generic d into something real (Char). The second trick is to cast our concrete Char back into a generic d type. Just like fromConstrM earlier, if you want to operate on exact indices of the constructor rather than going by type, you can use gmapM and use a state monad to do the same thing as we did before.

Use-case 7: generating from data structures generically

Another slightly different use-case is to walk over the values of a data structure, collecting the result. You can do this with gmapM and a state monad or a writer, but there's a handy function already to do this:

gmapQ :: forall a. Data a => (forall d. Data d => d -> u) -> a -> [u]

Trivial example:

λ> gmapQ (\d -> toConstr d) (Foo 5 'a')
[5,'a']

A more useful example can be found in structured-haskell-mode, which walks over the Haskell syntax tree and collects source spans into a flat list.
Printer example

Here's a trivial (not very good, but something I wrote once) generic printer:

gshows :: Data a => a -> ShowS
gshows = render `extQ` (shows :: String -> ShowS) where
  render t
    | isTuple = showChar '(' . drop 1 . commaSlots . showChar ')'
    | isNull = showString "[]"
    | isList = showChar '[' . drop 1 . listSlots . showChar ']'
    | otherwise = showChar '(' . constructor . slots . showChar ')'
    where constructor = showString . showConstr . toConstr $ t
          slots = foldr (.) id . gmapQ ((showChar ' ' .) . gshows) $ t
          commaSlots = foldr (.) id . gmapQ ((showChar ',' .) . gshows) $ t
          listSlots = foldr (.) id . init . gmapQ ((showChar ',' .) . gshows) $ t
          isTuple = all (==',') (filter (not . flip elem "()") (constructor ""))
          isNull = null (filter (not . flip elem "[]") (constructor ""))
          isList = constructor "" == "(:)"

I wrote it because the GHC API doesn't have Show instances for most of its data types, so it's rather hard to actually inspect any data types that you're working with in the REPL. It has instances for pretty printing, but pretty printing confuses presentation with data. Example (with gshow x = gshows x ""):

λ> data Foo = Foo Char Int deriving (Data,Typeable)
λ> gshow ([Just 2],'c',Foo 'a' 5)
"([(Just (2))],('c'),(Foo ('a') (5)))"

Note: no Show instance for Foo.

Summary

We've briefly covered how to query types, how to cast them, and how to walk over them or generate from them. There're other things one can do, but those are the main things. The real trick is understanding how to make the types work, and that comes with a bit of experience. Fiddle around with the concepts above and you should gain an intuition for what is possible with this library. See also: Data.Generics.Aliases.

Hope it helps!

[1] I'll migrate this to the HaskellWiki when it doesn't look so, uh, shall we say, unattractive.

2014-04-22T00:00:00Z

'R' is fer Realz, yo!

Planet Haskell - 16 hours 31 min ago
'R' is for Let's Get Real (the Real Number line)

... because this reality you're living in? It isn't. It's entirely manufactured for you and by you, and you have no clue that you're living in this unreality that you think isn't. It's as simple, and as pervasive, as this: the 'Real Numbers'? They aren't.

Let's take a step back.

First, as you know, there was counting, and that started, naturally, from the number one. But even that statement is so obviously flawed and ridiculous on the face of it to a modern-day mathematician. Why are you starting with the number one? Why aren't you starting with the number zero? This isn't an argument over semantics, by the way; this has real (heh!), fundamental impact, for the number one, in counting, is not the identity. You cannot add one to a number and get that number back, and if you can't do that, you don't have a category (a 'Ring'), and if you don't have that, you have nothing, because your number system has no basis, no foundation, and anything goes because nothing is sure.

But getting to the number zero, admitting that it exists, even though it represents the zero quantity, so technically it refers to things that don't exist (and therefore a fundamental problem with the number zero in ancient societies) ... well, there was a philosopher who posited that the number zero existed. He was summarily executed by Plato and his 'platonic' buddies because he had spouted heresy. If the number zero exists, then you had to be able to divide by it, and when you did, you got infinity. And, obviously, infinity cannot be allowed to exist, so no number zero for you.

We went on for a long, long time without the number zero. Even unto today. You study the German rule, then you learn your multiplication tables starting from which number? Not zero: one. "One times one is one. One times two is two. One times three is three." Where is the recitation for the "Zero times ..."?

And I mean 'we' as Western society, as shaped by Western philosophy. The Easterners, led by the Indians, had no problem admitting the number zero; they even had a symbol for it, 0, and then they played in the infinite playground it opened up, and therefore Eastern philosophy thrived, flourished, while Western philosophy, and society, stagnated, ...

... for one thousand years, ...

Just because ... now get this, ... just because people refused to open their eyes to a new way of seeing the world, ...

... through the number zero.

BOOM! That's what happened when finally West met East, through exchange of ideas through trade (spices) and the Crusades (coffee served with croissants), and philosophers started talking, and the number zero was raised as a possibility.

BOOM! Mathematics, mathematical ideas, and ideas themselves exploded onto the world and into thought. Now that there was zero, there was infinity; now that there was infinity, and it was tenable, people now had the freedom to explore spaces that didn't exist before. People could go to the New World now, both figuratively and literally. For growth in Mathematics comes from opening up your mind to the possibilities you wouldn't ('couldn't') consider before, and growth in Mathematics leads to opening the mind further.

Take, for example, the expansion of the counting numbers: from not admitting zero to, now, admitting it, yes, but then the fractional numbers.
You could count fractionally now. From zero to infinity there were an infinite number of numbers, but, with fractions, we now found that from zero to one, there were an equal number of infinite number of fractions. The really neat discovery was that if you put all the fractions in one set, and you put all the counting numbers into another, there was a one-to-one correspondence between the two. An infinity of counting numbers was the same size as an infinity of counting-by-fraction numbers.

Wow.

So, infinity was the biggest number, fer realz, then, eh?

No, not really.

Because then came what we call the 'Real Numbers' (which aren't, not by a long shot), and then we found an infinite number of numbers between one-half and one-third. But the thing with these numbers? There were rationals (fractional) in there, to be sure, but they were also irrationals. There were numbers like π and τ and e and other weird and wonderful numbers, and the problem with these numbers was that there was no correspondence between them and the rational numbers. There was no way you could combine rational numbers in any form and point directly to τ, for example. These numbers were transcendent. What's more: they were more. There were infinitely more transcendent numbers, irrational numbers, than there were rationals. And not even countably infinite more: they were uncountably more infinite. There was an infinite that was bigger than infinity, and this we call the Continuum.

Why? Because between zero and one and then between one and two there's this measured, discrete gap, and this we use for counting. There's a measured, even step between the counting numbers, and even between the fractional numbers: you can count by them, because between them there is this discreteness. Between the Reals there's no measurable gap. You can't count by them, and you can't add 'just this much' (every time) to go from τ to π ...

(Heh, actually, you can: τ = π + π, but then what? What's the 'next' irrational number? There's no such thing as the 'next' irrational number, because no matter what 'next' number you choose, there will always be an uncountably infinite number of numbers between that 'next' number and the number you started from, so your 'next' number isn't the next number at all, and never will be.)

So, wow, the Reals. Lots of them. They cover everything, then, right?

Not even close.

There are numbers that are not numbers. For example, what is the number of all the functions that yield the number zero? There are, in fact, an infinite number of those. How about all the functions that give you ('eventually') π? ... Ooh! There are several different ones to find π, aren't there? Yes. Yes, there are. In fact, there are an uncountably infinite number of functions that compute π.

Now, wait. You're saying, geophf, that there are an uncountably infinite number of functions to find each and every Real number and that the Real numbers are uncountable as well, so that means...

Yeah, the Continuum reigned supreme for just a while as the biggest-big number, but it was soon toppled by this Powerset infinity (my term for it, it's actually called something else).

Now, I don't know the relation between the functions that yield numbers, and the functions that construct functions that do that. But do you see where we're going with this? As big as you can stretch yourself, there are new vistas to see in mathematics (and meta-mathematics, let's not neglect that, now, shall we?).

But we still haven't scratched the surface.

Is the world quantized, like the rational numbers?
Or is it a continuum like the Reals? Or is it something else, even something more?

Electricity.

Direct current comes to you in a straight, steady line. The thing about DC ('direct current')? It sucks. (Just ask Marvel.)

If you want clean, pure, ... powerful, ... well: power, over any kind of distance, you have to ask ... not our friend Tommy (Edison), but our wild-child Tesla. He proposed to Edison that we should use AC ('alternating current') to provide electricity, and Edison threw him out of his lab, that idiot, telling him never to show his face there again. Guess how your electricity is delivered to your home today?

The thing about alternating current? It's a wave-form, and not only that, it's a triple wave-form. How do real numbers model electricity? Well, with DC, you've got one number: "That there is 5 volts or 5 amps or 1.21 gigawatts."

Boy, that last one came out of the blue: like a bolt of lightning!

Heh.

But if it's alternating current, then you need the sine and cosine functions to describe your power. Functions? Wouldn't it be nice if it were just a number? Yes, it would be nice, and there is a number to describe wave-form functions, like your alternating current.

'Imaginary' numbers.

They are called 'imaginary numbers,' because, if you look hard enough on the number line, with good enough eyes, eventually you'll see the number π or τ or e or 1, or 2, or 3, or even zero. But no matter how hard you strain your eyes, you will never see a number with an imaginary component, because why? Because most imaginary numbers, being on the curve of the wave-form, are either above or below the number line. They 'aren't' numbers, then, because they're not on the number line. They're imaginary. I mean, come on! The square-root of negative one? Why would anybody do this? Unless they were daft or a bit batty.

The thing is, without imaginary numbers, we wouldn't have the forms to get our heads around alternating current. Most of the world, except those very lucky few who lived within a mile or two of a power plant, would be in darkness. And the computer? Pfft! Don't get me started. Hie thee to the nunnery, because we are now back in the Dark Ages. Or at most in the Age of 'Enlightenment' where you had to run for cover when the landlady bellowed "'ware slops!" ... unless you wanted raw sewage mixed with rotted fruit on your head.

But now, here we are, because we have both Real and imaginary numbers, together giving us the complex number set (which, it turns out, is not bigger than the Reals, as there is a one-to-one correspondence between each real number and each complex number. Fancy that! An infinity 'more' number of complex numbers above and below the Real number line gives the same number of complex numbers as Reals).

We're good?

Not even close.

Last I checked, I don't live in Flatland, and, last I checked, nor do you. Complex numbers go 'above' and 'below' the Real number line, but ... what about the third dimension? Are there numbers to model us in three dimensions? What would such numbers look like?

And here's a stunner. If I were on Mars, or the Moon, and you were here, reading this blog post, how would I know where to look to see you? The Moon, and Mars, too, have their own three-dimensional frames of reference, and the Earth has its own, too (it's called geocentric).
So, to draw a line from a satellite (such as the Moon, known as 'Earth's satellite') so that it can look down at a spot on the Earth, you actually have to use a four-dimensional number to connect the two three-dimensional frames of reference so that one can look at the other. This four-dimensional number is called the Quaternion.

It's simple, really, it's just rocket science. And it's really ... mind-bending, at first, wrapping your head around the math and drawing pictures, or using both your hands, three fingers out indicating both sets of axes, and you use your nose to draw the line connecting the two, and then you scream, 'but how do I measure the angles?'

Not that I've worked on satellite projects or anything. cough-EarthWatch-cough.

But nowadays, you can't get into making a realistic game without having quaternions under your belt. Monster sees you, monster charges you, monster bonks you on the head. Game over, thank you for playing, please insert $1.50 to die ... that is to say, to try again.

The thing is: how does the monster 'see' you? The monster has its own frame of reference, just as you do. The monster exists in its own three-dimensional coordinate system, just as you do. If you were standing on a little hillock, would you expect the monster not to see you because you're slightly elevated? Of course not! The monster sees you, the monster bonks you. All of this happens through transformation of disparate coordinate systems via quaternions.

Now that's something to impress people with at cocktail parties. "Yeah, I spent all day translating coordinate systems using quaternions. Busy day, busy day." Just don't say that to a mathematician, because he (in general, 'he') will pause, scratch his head, then ask: "So you were checking out babes checking you out?" Then you'll have to admit that, no, not that, you were instead avoiding your boss trying to catch your eye so he could hand you a whole stack of TPS reports to work on over the weekend.

Like I ... didn't. Ooh. Ouch! Guess who was working through Easter? Fun, fun!

Okay, though. Four dimensions. We've got it all, now, right?

Not if you're a bee.

Okay, where did that come from?

Bees see the world differently from you and me. (Please reflect on the syntax of that sentence, writers. Bees see the world differently (not: 'different') from you and me (not: 'from you and I'). Gosh! Where is the (American) English language going?)

(But I digress.) (As always.)

If you look at how they communicate through their dance, you see an orientation, but you also see some other activity: they 'waggle' (so it's called the 'waggle-dance'), and the vigor with which they do their waggle communicates a more interesting cache of nectar. There are other factors, too. The fact of the matter: three dimensions are not enough for the bee's dance to communicate what it needs to say to the other drones about the location, distance, and quantity of nectar to be found. So, it has its waggle-dance to communicate this information. Everybody knows this.

Until, one little girl, working on her Ph.D. in mathematics, stopped by her daddy's apiary, and, at his invitation, watched what he was doing.

Eureka.

"Daddy," she said, "those bees are dancing in six dimensions."

Guess who changed the topic of her Ph.D., right then and there?

Combining distance, angle from the sun, quantity of interest, ... all the factors, the bees came up with a dance. They had only three dimensions to communicate six things. The thing is, nobody told the bees they had only three dimensions to work with.
So they do their dance in six dimensions. If you map what they are doing up to the sixth dimension, it gives a simple (six-dimensional) vector, which conveys all the information in one number. Bees live in six dimensions, and they live pretty darn well in it, too. Or, put this way: 80% of the world's food supply would disappear if bees didn't do what they did. You are living in six dimensions, or, more correctly, you are alive now, thanks to a little six-dimensional dance.

Six dimensions.

Okay, but what if you're Buckaroo Banzai? Pfft! Eight dimensions? Light weight. In String Theory, we need at least ten dimensions for superstrings, and twenty-six dimensions for some types of strings.

So, 'R' is for real numbers. The neat thing about numbers is ... they can get as big as you can think them. And they're good for a Ph.D. thesis ... or two.

http://logicaltypes.blogspot.com/2014/04/f-is-for-function.html

Bellman Confirms A Suspicion

Planet Haskell - 16 hours 31 min ago
As is by now well-known, I regard the supposed opposition between static and dynamic languages as a fallacy: the latter, being a special case of the former, can scarcely be an alternative to it.  I cannot tell you how many times I’ve been handed arguments along the lines of “oh, static languages are just fine, but I want […]

Setting up Samsung Wireless Printer on Linux

Planet Haskell - 16 hours 31 min ago
Here’s a complete guide for setting up a wireless Samsung printer on Linux, where by “setting up” I mean making it connect to your wireless network. It worked for me with a Samsung ML-2165W on Debian GNU/Linux «jessie», but should work for other models and distributions, too.

Connecting a Samsung printer to a wireless network

Create a new, temporary user. We’ll use it to launch Samsung’s wireless setup utility. This is optional, but it provides an additional layer of security (who knows what those utilities from Samsung do behind the scenes) and ensures that nothing will mess with your system. We add the new user to the lp group, so that it can talk to the printer.

user$ sudo useradd --create-home --shell /bin/bash --groups lp samsung

Allow the new user to use our display. (Samsung’s utility is graphical.)

user$ xhost +local:samsung

Now, time to switch to our temporary user.

user$ sudo su - samsung

Download Samsung’s PSU (“Printer Settings Utility”) archive from their website. Unpack it and go to the wirelesssetup directory.

samsung$ wget http://downloadcenter.samsung.com/content/DR/201110/20111019151150392/PSU_1.01.tar.gz
samsung$ tar xzf PSU_1.01.tar.gz
samsung$ cd cdroot/Linux/wirelesssetup

Check if there are any missing dynamic libraries:

samsung$ ldd bin/wirelesssetup | grep 'not found'

(Note: this is for a 32-bit system. On a 64-bit system, replace bin with bin64.) In my case, the output was

libnetsnmp.so.10 => not found

This particular library is included in the PSU archive, so we load it by

samsung$ export LD_PRELOAD=$PWD/../psu/share/lib/libnetsnmp.so.10.0.2

(Likewise, replace lib with lib64 on a 64-bit system.) If there are more missing libraries, first see if your distribution ships them. The major versions must match! E.g. Debian jessie ships libnetsnmp.so.30.0.2, which has the major version number 30, so that won’t do. If your distribution doesn’t have the right version, use a resource like http://rpm.pbone.net/ to find a package that has one. Unpack it (do not install!) and set LD_PRELOAD and/or LD_LIBRARY_PATH so that the libraries are found.

Now connect the printer via a USB cable to the Linux machine and run

samsung$ bin/wirelesssetup /dev/usb/lp0

A graphical window should appear, where you’ll be able to choose your wireless network and enter the password for it. After you’ve made the printer connect to the wireless network, you can log out and remove the temporary user. Note that the command below will remove that user’s home directory.

user$ sudo userdel --remove samsung

Troubleshooting

Please do not ask me about the problems you may have with your printer. Either try to solve them yourself, or use the usual venues (forums, mailing lists, Samsung support etc.) to ask for help. However, if you solved your problems, and they were related to the instructions above, please do contact me so that I can fix/update the instructions. If this article helped you and you want to say thanks, that’s fine, too :-)

2014-04-20T21:00:00Z http://ro-che.info/articles/2014-04-21-wireless-samsung-printer-linux.html

CUFP 2014 Call For Presentations

Planet Haskell - 16 hours 31 min ago
Workshop for Commercial Users of Functional Programming 2014
Sponsored by SIGPLAN
[CUFP 2014](http://cufp.org/conference)
Co-located with ICFP 2014
Gothenburg, Sweden, Sep 4-6
Talk Proposal Submission Deadline: 27 June 2014
CUFP 2014 Presentation Submission Form

The annual CUFP workshop is a place where people can see how others are using functional programming to solve real world problems; where practitioners meet and collaborate; where language designers and [...]

Announcing cabal 1.20

Planet Haskell - 16 hours 31 min ago
On behalf of all cabal contributors, I'm proud to announce cabal 1.20. This is quite a big release, with 404 commits since 1.18. To install:

cabal update
cabal install Cabal-1.20.0.0 cabal-install-1.20.0.0

New features

Since there are 404 commits since cabal 1.18, there are too many changes to give all of them a just treatment here. I've cherry-picked some that I thought you would find interesting. To see the full list of commits, run:

git log Cabal-v1.18.1.3..Cabal-v1.20.0.0

in the cabal repo.

Dependency freezing

If you're building an application that you're delivering to customers, either as a binary or as something that runs on your servers, you want to make sure unwanted changes don't sneak in between releases. For example, say you've found a bug in the just-released 1.0.0.0 version and you want to release 1.0.0.1, which contains a fix. If you build the binary on a different machine than you built the release on, you risk building it against a slightly different set of dependencies, which means that you're introducing untested code into your application, possibly causing new bugs.

cabal freeze lets developers of applications freeze the set of dependencies used to make builds reproducible. cabal freeze computes a valid set of package versions, just like cabal install does, and stores the result in cabal.config. You can then check cabal.config into your source control system to make sure that all developers that work on the application build against the exact same set of dependencies. Here's the contents of the cabal.config file created by running cabal freeze in the cabal-install repo:

constraints: ...
             HTTP ==4000.2.8,
             array ==0.4.0.1,
             base ==4.6.0.1,
             bytestring ==0.10.0.2,
             ...

If you later want to update the set of dependencies, either manually edit cabal.config or delete (parts of) cabal.config and re-run cabal freeze. Note that you should only use cabal freeze on applications you develop in-house, not on packages you intend to upload to Hackage.

Parallel cabal build

Starting with 7.8, GHC now accepts a -j flag to allow using multiple CPU cores to build a single package. This build performance enhancement is now exposed as a -j flag on cabal build (and also cabal test/bench/run). Build time improvements are modest, but free.

Flag to temporarily ignore upper bounds

When a new major version of a package P is released, it usually takes a little while for packages that depend on P to update their version bounds to allow for the new version. This can be frustrating if you just want to test if your own package, which depends on both some of these other packages and on P, builds with the new P. The --allow-newer=P flag tells the dependency solver to ignore all upper version bounds on P, allowing you to try to compile all packages against this newer version.

Unnecessary re-linking avoidance

Before 1.20, cabal would sometimes re-link build targets that hadn't changed. For example, if you had several test suites that tested different parts of your library, every test suite would be re-linked when you ran cabal test, even if no source file that the test suite depends on was changed. The same thing would happen for executables and benchmarks. Now cabal doesn't re-link executables (of any kind) unless something changed.

Streaming cabal test output

cabal test can now stream its output to stdout, making it easier to see progress for long-running tests. Streaming output is enabled by passing --show-details=streaming to cabal test and is off by default (for now).
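Taken together, a typical session with the new features might look like this (a sketch only; foo stands in for a hypothetical dependency whose new major version you want to test against):

cabal freeze                           # pin the working set of dependency versions in cabal.config
cabal build -j                         # let GHC 7.8 use multiple cores on a single package
cabal test --show-details=streaming    # stream test output instead of buffering it
cabal install --allow-newer=foo        # ignore upper bounds on foo while testing the new version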
New cabal exec command

Cabal sandboxes have almost completely replaced previous sandbox implementations. There was one use case that wasn't completely captured by the integrated sandbox implementation, namely starting new shells where the environment was set up to automatically use the sandbox for all GHC commands. cabal exec allows you to launch any binary in an environment where the sandbox package DB is used by default. In particular you can launch a new shell using cabal exec [ba]sh.

Haddock configuration options

Haddock options can now be set in ~/.cabal/config. Here are the options and their default values:

haddock
  -- keep-temp-files: False
  -- hoogle: False
  -- html: False
  -- html-location:
  -- executables: False
  -- tests: False
  -- benchmarks: False
  -- all:
  -- internal: False
  -- css:
  -- hyperlink-source: False
  -- hscolour-css:
  -- contents-location:

How to use cabal

While not strictly related to this release, I thought I'd share how we expect users to use cabal. Using cabal this way makes sure that the features work well for you, now and in the future. The main message is this: to build a package, use build, not install. Building packages using cabal install comes from a time when installing dependencies was more difficult, depending on non-published packages was difficult (i.e. no sandbox add-source), and using the other commands required manual configure-ing. My intention is to remove the need for install from the development workflow altogether.

Today the recommended way to build a package is to run this once:

cabal sandbox init
cabal install --only-dep # optionally: --enable-tests

The development workflow is then just

cabal build/test/bench/run

No configuring (or manual rebuilding) needed. build implies configure and test/bench/run imply build. Soon build will also imply the above dependency installation, when running in a sandbox.

Credits

Here are the contributors for this release, ordered by number of commits: Mikhail Glushenkov, Johan Tibell, Duncan Coutts, Thomas Tuegel, Ian D. Bollinger, Ben Armston, Niklas Hambüchen, Daniel Trstenjak, Tuncer Ayaz, Herbert Valerio Riedel, Tillmann Rendel, Liyang HU, Dominic Steinitz, Brent Yorgey, Ricky Elrod, Geoff Nixon, Florian Hartwig, Bob Ippolito, Björn Peemöller, Wojciech Danilo, Sergei Trofimovich, Ryan Trinkle, Ryan Newton, Roman Cheplyaka, Peter Selinger, Niklas Haas, Nikita Karetnikov, Nick Smallbone, Mike Craig, Markus Pfeiffer, Luke Iannini, Luite Stegeman, John Lato, Jens Petersen, Jason Dagit, Gabor Pali, Daniel Velkov, and Ben Millwood.

Apologies if someone was left out. Once in a while we have to make commits on behalf of an author. Those authors are not captured above.

JSON validation combinators

Planet Haskell - 16 hours 31 min ago
Why write a custom JSON validator?

At Signal Vine we have a JSON-based language for describing text messaging campaigns. We may design a better surface syntax for this language in the future, but the current one gets the job done, and there are certain existing systems that depend on it.

Anyway, the problem with this language is that it is too easy to make mistakes — including errors in JSON syntax, structural errors (plugging an array where an object is expected), or name errors (making a typo in a field name). So the first thing I did was write a validator for our JSON format.

There are several projects of «JSON schemas» around, but there were many reasons against using them. I don’t know about the quality of the tools that support such schemas (i.e. the quality of error messages they generate), or the expressivity of the schemas themselves (whether they’d let us express the structure of our JSON documents and the constraints we’d like to enforce). So, though it may seem that using an existing solution is «free», it is not — I’d have to spend time learning and evaluating these existing solutions. I remember that we went through this in our team at Barclays, and eventually decided to create a custom JSON schema language, although I was not involved in the evaluation process, so I can’t share the details. I was almost certain that no existing «generic JSON schema» solution could provide the power of a custom one. For instance, some of the JSON strings contain expressions in another in-house mini-language. Ideally, I’d like to parse those expressions while I am parsing the enclosing JSON structure, and give locations of possible errors as they appear in the JSON file. I’d need a parser for the language anyway. Maintaining a schema separately from the parser would mean one more thing to keep in sync and worry about.

I couldn’t use an existing JSON parsing library either. Of course, aeson was out of the question, being notorious for its poor error messages (since it’s based on attoparsec and optimized for speed). json, though, is based on parsec, so its error messages are better. But there is a deeper reason why a JSON parsing library is inadequate for validation. All of the existing JSON libraries first parse into a generic JSON structure, and only then do they try to recognize the specific format and convert to a value of the target Haskell type. Which means that during parsing, only JSON syntax errors will be detected, but not the other kinds of errors described above. Granted, they all can be detected sooner or later. But what differentiates sooner from later is that once we’re out of the parsing monad, we no longer have access to the position information (unless, of course, our JSON parsing library does extra work to store locations in the parsed JSON structure — which it typically doesn’t). And not having such position information severely impacts our ability to produce good error messages.

To summarize, in order to provide good diagnostics, it is important to parse exactly the language we expect (and not some superset thereof), and to perform all the checks in the parsing monad, where we have access to location information.

JSON parsing combinators

Even though I couldn’t re-use an existing JSON parser or schema, I still wanted my parser to be high-level, and ideally to resemble a JSON schema, just embedded in Haskell. The rest of this article describes the JSON schema combinators I wrote for this purpose.
Strings

As I mentioned before, the json package uses parsec underneath, so I was able to reuse some basic definitions from there — most notably, p_string, which parses a JSON string. This is fortunate, because handling escape sequences is not straightforward, and I’d rather use a well-tested existing implementation.

string :: Parser String
string = {- copied from Text.JSON.Parsec -}

I introduced one other combinator, theString, which parses a given string:

theString :: String -> Parser ()
theString str = (<?> "\"" ++ str ++ "\"") $ try $ do
  str' <- string
  if str == str'
    then return ()
    else empty

Objects

Objects are an interesting case because we know what set of fields to expect, but not the order in which they come (it may be arbitrary). Such syntax is known as a «permutation phrase», and can be parsed as described in the classical paper Parsing Permutation Phrases by Arthur Baars, Andres Löh and Doaitse Swierstra.

There are surprisingly many implementations of permutation parsing on hackage, including one in parsec itself. Most of them suffer from one or both of the following issues:
- they use custom combinators, which, despite being similar to Applicative and Alternative operators, have their quirks and require learning
- they don’t support permutation phrases with separators, which is obviously required to parse JSON objects. (The technique to parse permutation phrases with separators was described in the original paper, too.)

On the other hand, the action-permutations library by Ross Paterson addresses both of these issues. It provides the familiar Applicative interface to combine permutation elements (or atoms, as it calls them), and includes the function runPermsSep to parse phrases with separators. The interface is also very generic, requiring the underlying functor to be just Alternative.

Below are the combinators for parsing JSON objects. field parses a single object field (or member, as it’s called in the JSON spec), using the supplied parser to parse the field’s value. optField is similar, except it returns Nothing if the field is absent (in which case field would produce an error message). Finally, theField is a shortcut to parse a field with fixed contents. It is useful when there’s a tag-like field identifying the type/class of the object, for instance:

data Item
  = Book
      String -- writer
  | Song
      String -- composer
      String -- singer

item = (try . object $ Book
          <$ theField "type" "book"
          <*> field "writer" string)
   <|> (try . object $ Song
          <$ theField "type" "song"
          <*> field "composer" string
          <*> field "singer" string)

(Note: examples in this article have nothing to do with the JSON schema we actually use at Signal Vine.)

One thing to pay attention to is how field parsers (field, theField and optField) have a different type from the ordinary parsers. This makes it much easier to reason about what actually gets permuted.
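To illustrate the permutation aspect, here is a sketch (not from the original post): assuming Item derives Show, tok consumes trailing whitespace, and parsec's parseTest is used as the runner together with the object combinator defined below, the fields of item may arrive in any order:

λ> parseTest item "{\"composer\": \"Glass\", \"singer\": \"Anderson\", \"type\": \"song\"}"
Song "Glass" "Anderson"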
object :: Perms Parser a -> Parser a
object fields = (<?> "JSON object") $
  between (tok (char '{')) (tok (char '}')) $
    runPermsSep (tok (char ',')) fields

-- common function used by field and optField
field'
  :: String   -- key name
  -> Parser a -- value parser
  -> Parser a
field' key value = theString key *> tok (char ':') *> value

field
  :: String   -- key name
  -> Parser a -- value parser
  -> Perms Parser a
field key value = atom $ field' key value

theField
  :: String -- key name
  -> String -- expected value
  -> Perms Parser ()
theField key value = () <$ field key (theString value)

optField
  :: String   -- key name
  -> Parser a -- value parser
  -> Perms Parser (Maybe a)
optField key value = maybeAtom $ field' key value

Aside: correct separator parsing

There was only one issue I ran into with action-permutations, and it is interesting enough that I decided to describe it here in more detail. Consider, for example, the expression

runPermsSep sep (f <$> atom a <*> atom b <*> atom c)

It would expand to

(flip ($) <$> a <*>
  (     (sep *> (flip ($) <$> b <*> (sep *> (flip ($) <$> c <*> pure (\xc xb xa -> f xc xb xa)))))
    <|> (sep *> (flip ($) <$> c <*> (sep *> (flip ($) <$> b <*> pure (\xc xb xa -> f xb xc xa)))))
  ))
<|> (flip ($) <$> b <*> ( ... ))
<|> (flip ($) <$> c <*> ( ... ))

See the problem? Suppose the actual order of the atoms in the input stream is a, c, b. At the beginning the parser is lucky to enter the right branch (the one starting with flip ($) <$> a <*> ...) on the first guess. After that, it has two alternatives: b-then-c, or c-then-b. First it enters the b-then-c branch (i.e. the wrong one) and fails. However, it fails after having consumed some input (namely, the separator) — which in libraries like parsec and trifecta means that the other branch (the right one) won’t be considered.

We cannot even work around this outside of the library by using try, because we can’t insert it in the right place. E.g. wrapping the separator in try won’t work. The right place to insert try would be around the whole alternative

(sep *> (flip ($) <$> b <*> (sep *> (flip ($) <$> c <*> pure (\xc xb xa -> f xc xb xa)))))

but this piece is generated by the library and, as a library user, we have no control over it. The usage of try inside the library itself is unsatisfactory, too. Remember, the interface only assumes the Alternative instance, which has no notion of try. If we had to make it less generic by imposing a Parsing constraint, that would be really unfortunate.

Fortunately, once identified, this problem is not hard to fix properly — and no usage of try is required! All we need is to change runPermsSep so that it expands to the tree where separator parsing is factored out:

(flip ($) <$> a <*> sep *> (
      (flip ($) <$> b <*> sep *> (flip ($) <$> c <*> pure (\xc xb xa -> f xc xb xa)))
  <|> (flip ($) <$> c <*> sep *> (flip ($) <$> b <*> pure (\xc xb xa -> f xb xc xa)))
  ))
<|> (flip ($) <$> b <*> sep *> ( ... ))
<|> (flip ($) <$> c <*> sep *> ( ... ))

Now, all alternatives start with atoms, so we have full control over whether they consume any input. Mathematically, this demonstrates that <*> does not distribute over <|> for some backtracking parsers. Note that such a distributive property is not required by the Alternative class. Even for parser monads that allow backtracking by default (attoparsec, polyparse) and for which there’s no semantic difference between the two versions, this change improves efficiency by sharing separator parsing across branches.
My patch fixing the issue has been incorporated into version 0.0.0.1 of action-permutations.

Arrays

Arrays should be easier to parse than objects in that the order of the elements is fixed. Still, we need to handle separators (commas) between array elements. If we interpreted arrays as lists, then the schema combinator for arrays might look like

array :: Parser a   -- parser for a single element
      -> Parser [a] -- parser for the array

Implementation would be straightforward, too. However, in our JSON schema we use arrays as tuples rather than lists. That is, we typically expect an array of a fixed number of heterogeneous elements. Thus we’d like to combine these tuple elements into a single parser using the applicative interface. Let’s say we expect a 2-tuple of a string (a person’s name) and an object (that person’s address):

data Address -- = ...
data Person = Person
  String -- name
  Address

Written by hand, the parser may look like

address :: Parser Address
address = _

personAddress =
  between (tok (char '[')) (tok (char ']')) $
    Person <$> string <* sep <*> address
  where sep = tok $ char ','

It makes sense to move brackets parsing to the array combinator:

array :: Parser a -> Parser a
array p = (<?> "JSON array") $
  between (tok (char '[')) (tok (char ']')) p

But what should we do with the commas? Manually interspersing elements with separators is error-prone and doesn’t correspond to my view of a high-level JSON schema description. Inserting comma parsers automatically isn’t impossible — after all, it is done in the action-permutations package and we used it to parse object fields, which are comma-separated, too. But it cannot be as easy as adding a separator to every element, since there’s one fewer separator than elements. We somehow have to detect the last element and not expect a separator after it.

A nice and simple way to achieve this is with a free applicative functor. A free applicative functor will allow us to capture the whole applicative expression and postpone the decision on where to insert separator parsers until we can tell which element is the last one. In this case we’ll use Twan van Laarhoven’s free applicative, as implemented in the free package.

element :: Parser a -> Ap Parser a
element = liftAp

theArray :: Ap Parser a -> Parser a
theArray p = between (tok (char '[')) (tok (char ']')) $ go p
  where
    go :: Ap Parser a -> Parser a
    go (Ap p (Pure f)) = f <$> p
    go (Ap p1 pn) = flip id <$> p1 <* tok (char ',') <*> go pn

Like object fields, array elements have a special type which makes it clear which pieces exactly are comma-separated. In fact, the applicative functor Perms is essentially the free applicative functor Ap plus branching.

Optional array elements

Now comes the twist. Some of the array elements may be optional — in the same way as positional function arguments in some languages may be optional. Since the elements are positional, if one of them is omitted, all subsequent ones have to be omitted, too — otherwise we won’t be able to tell which one was omitted, exactly. For that reason, all optional elements should come after all the non-optional ones; if not, then we’ve made a mistake while designing (or describing) our schema. Ideally, our solution should catch such mistakes, too. So, how can the above solution be adapted to handle optional arguments?
Attempt #1:

optElement :: Parser a -> Ap Parser (Maybe a)
optElement p = element $ optional p

Here optional is a combinator defined in Control.Applicative as

optional v = Just <$> v <|> pure Nothing

This won’t work at all, as it doesn’t give us any information about whether it’s an optional element or just a parser that happens to return a Maybe type.

Attempt #2: Well, let’s just add a flag indicating whether the element was created using optElement, shall we?

data El a = El
  Bool -- is optional?
  (Parser a)

element :: Parser a -> Ap El a
element = liftAp . El False

optElement :: Parser a -> Ap El (Maybe a)
optElement = liftAp . El True . optional

Now we can check that optional arguments come after non-optional ones. If an element’s parse result is Nothing, we also know whether that element is an optional one, and whether we should stop trying to parse the subsequent elements. Still, there are two related issues preventing this version from working:
- How do we actually know when a parser returns Nothing? Once we lift a parser into the free applicative, its return type becomes existentially quantified, i.e. we should treat it as polymorphic and cannot assume it has form Maybe a (by pattern-matching on it), even if we can convince ourselves by looking at the Bool flag that it indeed must be of that form.
- Similarly, once we’ve detected an absent optional element (assuming for a second that it is possible), we have to force all the remaining optional parsers to return Nothing without parsing anything. But again, we cannot convince the compiler that Nothing is an acceptable return value of those parsers.

Attempt #3: So, we need certain run-time values (the optionality flag) to introduce type-level information (namely, that the parser’s return type has form Maybe a). That’s exactly what GADTs do!

data El a where
  El :: Parser a -> El a
  OptEl :: Parser a -> El (Maybe a)

El’s two constructors are a more powerful version of our old Bool flag. They let us see whether the element is optional, and if so, guarantee that its parser’s return type is Maybeish.

And here’s the code for the parsing functions:

element :: Parser a -> Ap El a
element = liftAp . El

optElement :: Parser a -> Ap El (Maybe a)
optElement = liftAp . OptEl

theArray :: Ap El a -> Parser a
theArray p = between (tok (char '[')) (tok (char ']')) $ go True False False p
  where
    go :: Bool -> Bool -> Bool -> Ap El a -> Parser a
    go _ _ _ (Pure x) = pure x
    go isFirst optionalOccurred optionalOmitted (Ap el1 eln) =
      let
        eltSequenceError :: a
        eltSequenceError = error
          "theArray: a non-optional element after an optional one"

        !_check =
          case el1 of
            El {} | optionalOccurred -> eltSequenceError
            _ -> ()
      in
        if optionalOmitted
          then
            case el1 of
              El {} -> eltSequenceError
              OptEl {} -> go False True True eln <*> pure Nothing
          else do
            let sep = if isFirst then pure () else () <$ tok (char ',')
            case el1 of
              El p1 ->
                flip id <$ sep <*> p1 <*> go False False False eln
              OptEl p1 -> do
                r1 <- optional $ sep *> p1
                go False True (isNothing r1) eln <*> pure r1

theArray is a state machine with three pieces of state: isFirst, optionalOccurred and optionalOmitted. isFirst and optionalOmitted are used to guide actual parsing, while optionalOccurred is needed to check the proper arrangement of optional vs non-optional arguments.
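To round this off, here is a sketch of the combinators in use (PersonRecord and the example inputs are mine, not the article's; it assumes the string parser from earlier and a Show instance on the result):

data PersonRecord = PersonRecord
  String         -- name
  (Maybe String) -- optional nickname
  deriving Show

personRecord :: Parser PersonRecord
personRecord = theArray $ PersonRecord
  <$> element string
  <*> optElement string

-- ["Alice"]        would parse to: PersonRecord "Alice" Nothing
-- ["Alice", "Al"]  would parse to: PersonRecord "Alice" (Just "Al")
-- Putting element after optElement would trigger eltSequenceError.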
Conclusions

Although the standard approach to JSON parsing is to parse into a generic JSON representation first, the article shows that an alternative approach — parsing the expected structure directly — is also viable and can be employed to improve error reporting. Of course, the tricks and ideas described here are not specific to JSON. Understanding how they work and how to use them may become handy in a variety of parsing situations.

2014-04-19T21:00:00Z http://ro-che.info/articles/2014-04-20-json-validation-combinators.html

Silicon Valley could force NSA reform, tomorrow. What's taking so long?

Planet Haskell - 16 hours 31 min ago
CEOs from Yahoo to Dropbox and Microsoft to Zynga met at the White House, but are they just playing for the cameras? Photograph: Kevin Lamarque / Reuters

Trevor Timm asks a key question in The Guardian:

The CEOs of the major tech companies came out of the gate swinging 10 months ago, complaining loudly about how NSA surveillance has been destroying privacy and ruining their business. They still are. Facebook founder Mark Zuckerberg recently called the US a "threat" to the Internet, and Eric Schmidt, chairman of Google, called some of the NSA tactics "outrageous" and potentially "illegal". They and their fellow Silicon Valley powerhouses – from Yahoo to Dropbox and Microsoft to Apple and more – formed a coalition calling for surveillance reform and had conversations with the White House.

But for all their talk, the public has come away empty handed. The USA Freedom Act, the only major new bill promising real reform, has been stalled in the Judiciary Committee. The House Intelligence bill may be worse than the status quo. Politico reported on Thursday that companies like Facebook and are now "holding fire" on the hill when it comes to pushing for legislative reform.

...

We know it's worked before. Three years ago, when thousands of websites participated in an unprecedented response to internet censorship legislation, the Stop Online Piracy Act (Sopa), the public stopped a once-invincible bill in its tracks. If they really, truly wanted to do something about it, the online giants of Silicon Valley and beyond could design their systems so that even the companies themselves could not access their users' messages by making their texting and instant messaging clients end-to-end encrypted.

But the major internet outfits were noticeably absent from this year's similar grassroots protest – dubbed The Day We Fight Back – and refused to alter their websites à la Sopa. If they really believed the NSA was the threat so many of them have claimed, they'd have blacked out their websites in protest already.

I Shall Vote No

Planet Haskell - 16 hours 31 min ago
Spotted on Bella Caledonia.

[After Christopher Logue, I Shall Vote Labour (1966)]
By A.R. Frith

I shall vote No because, without Westminster, we’d never have got rid of the Poll Tax
I shall vote No because eight hundred thousand Scots live in England, and there are no jobs here to match their talents and meet their aspirations
I shall vote No, because my grandmother was a MacDougall
I shall vote No in case Shell and BP leave and take their oil with them
I shall vote No because otherwise we would have to give back the pandas
I shall vote No because I am feart
I shall vote No because the people who promised us a better deal if we voted No in 79, and warned us of the dire consequences of devolution in 97, tell us we should
I shall vote No so as not to let down my fellow socialists in Billericay and Basildon
and
I shall vote No, because if we got rid of Trident and stopped taking part in illegal wars we would be a target for terrorism
I shall vote No because if I lived under a government that listened to me and had policies I agreed with, I wouldn’t feel British
I shall vote No because the RAF will bomb our airports if we are a separate country
I shall vote No because to vote Yes dishonours the Dead of the Great War, who laid down their lives for the rights of small nations
I shall vote No, lest being cut off from England turns Red Leicester cheese and Lincolnshire sausages into unobtainable foreign delicacies, like croissants, or bananas
I shall vote No, because, as a progressive, I have more in common with Billy Bragg or Tariq Ali, who aren’t Scottish, than some toff like Lord Forsyth, who is.
I shall vote No, because the certainty of billions of pounds worth of spending cuts to come is preferable to the uncertainty of wealth
I shall vote No, because it is blindingly obvious that Scotland’s voice at the UN, and other international bodies, will be much diminished if we are a member-state
I shall vote No because having a parliament with no real power, and another which is run by people we didn’t vote for, is the best of both worlds
I shall vote No because I trust and admire Nick Clegg, who is promising us Federalism when the Liberals return to office
I shall vote No, because Emma Thompson would vote No, and her Dad did The Magic Roundabout
I shall vote No, because A.C. Grayling would vote No, and his Mum was born on Burns Night
I shall vote No because David Bowie asked Kate Moss to tell us to, and he lives in New York and used to be famous
I shall vote No, because nobody ever asks me what I think
I shall vote No, because a triple-A credit rating is vital in the modern world
I shall vote No because things are just fine as they are
I shall vote No because the English say they love us, and that if we vote Yes, they will wreck our economy.

Haskell gets static typing right

Planet Haskell - 16 hours 31 min ago
> The following blog post originally appeared as a guest post on the Skills Matter blog. Well-Typed are regularly teaching both introductory and advanced Haskell courses at Skills Matter, and we will also be running a special course on Haskell’s type system.

Statically typed languages are often [...]

Autocomplete command-line flags with GHC 7.8.2

Planet Haskell - 16 hours 31 min ago
GHC 7.8.2 was released just a few days ago. This is the first official release of GHC that contains my contributions. Most of them are improvements in the code generator and are thus practically invisible to most users. But I also implemented one very nice feature that will be useful to an average Haskeller. […]

Team Scotland

Planet Haskell - 16 hours 31 min ago
What happens after Yes? It's not the SNP, it's the people. A great post on Bella Caledonia:

The UK chattering classes have been wondering what a real, mass grass-roots campaign might look like in modern, professionalised politics. Impotent is their usual conclusion. Well come on up and we’ll show you. The old feudal dance where councilor doffs cap to MP, MP to Minister, Minister to Prime Minister and Prime Minister to corporate CEO may well continue apace even here in Scotland. But it’s not winning Scotland.

Help ORG restart the debate about internet filters

Planet Haskell - 16 hours 31 min ago
The Open Rights Group is starting a campaign opposed to the default filtering now imposed by all providers in the UK---de facto censorship. You can fund it via IndieGogo.

Continuations and Exceptions

Planet Haskell - 16 hours 31 min ago
Posted on April 14, 2014

Continuations are useful things. They provide a nice way to manually handle control flow. This fact makes them very useful for compilers; an often-used alternative to SSA is an intermediate language in which every function call is a tail call and every expression has been converted to continuation passing style.

Often, however, this isn’t enough. In a language with exceptions, we don’t just have a single continuation, since every expression can do one of two things:
- continue the rest of the program normally, or
- throw an exception and run an alternative program, the exception handler.

To represent this, we can imagine having two continuations. Instead of

newtype Cont r a = Cont {runCont :: (a -> r) -> r}

we have

{-# LANGUAGE DeriveFunctor #-}
import Control.Monad

newtype Throws r e a = Throws {runThrows :: (e -> r) -> (a -> r) -> r}
  deriving (Functor)

Now we have two continuations, where e -> r represents the composition of exception handlers. We can write a trivial monad instance similar to Cont:

instance Monad (Throws r e) where
  return a = Throws $ \ex cont -> cont a
  (Throws c) >>= f = Throws $ \ex cont ->
    c ex $ \a -> runThrows (f a) ex cont

So >>= maintains the exception handler between computations and otherwise acts exactly like Cont. To actually take advantage of our exception handlers, we need two things: a throw- and catch-like pair of functions. Let’s start with throw since it’s easiest:

throw :: e -> Throws r e a
throw e = Throws $ \ex cont -> ex e

This is pretty straightforward: when we’re given an exception to throw, we simply feed it to our exception handler continuation. Since we don’t care what value cont needs, we can universally quantify over a.

Next up is handle. We’ll represent an exception handler as a function e -> Maybe a. If we return Nothing, we can’t handle the exception at this level and we’ll just pass it to the existing exception handler. So our handle is

handle :: Throws r e a -> (e -> Maybe a) -> Throws r e a
handle (Throws rest) handler = Throws $ \ex cont ->
  rest (\e -> maybe (ex e) cont (handler e)) cont

Notice the clever bit here: each handler actually contains both the success and failure continuations! If we can’t handle the exception we fail, otherwise we can resume exactly where we were before.

No post would be complete without a demonstration!

data Ex = Boom | Kaboom | Splat String
  deriving Show

throwEx1 = throw Boom
throwEx2 = throw Kaboom
throwEx3 = throw (Splat "splat")

test = do
  result <- handle throwEx1 $ \e ->
    case e of
      Boom -> Just "foo"
      _ -> Nothing
  result2 <- handle throwEx2 $ \e ->
    case e of
      Boom -> Just "what"
      Kaboom -> Just "huh?"
      _ -> Nothing
  result3 <- handle throwEx3 $ \e ->
    case e of
      Splat s -> Just s
      _ -> Nothing
  return (unwords [result, result2, result3])

We can run this with

runThrows (error . ("Toplevel fail "++)) test

which returns "foo huh? splat".

A Note on Either

Now we already have a perfectly good system of monadic exception-like things in the form of Either. It might be interesting to note that what we’ve written is in fact isomorphic to Either: (e -> r) -> (a -> r) -> r is just the Church representation of Either e a. We can even go all the way and change Throws to

newtype Throws e a = Throws {runThrows :: forall r. (e -> r) -> (a -> r) -> r}
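To make the isomorphism concrete, here is a minimal sketch of the two conversion functions (the names toEither and fromEither are mine, not the post's; RankNTypes is needed for the quantified newtype):

{-# LANGUAGE RankNTypes #-}

toEither :: Throws e a -> Either e a
toEither t = runThrows t Left Right

fromEither :: Either e a -> Throws e a
fromEither (Left e)  = Throws $ \ex _    -> ex e
fromEither (Right a) = Throws $ \_  cont -> cont a

-- Round-tripping a value through either composition gives back
-- what we started with, which is all the claimed isomorphism says.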
So there you are, an interesting realization that one of the classic representations of exceptions in a language like SML is in fact a really complicated version of Either :)

2014-04-14T00:00:00Z