Functional Programming

Prime Minister of Singapore plans to learn Haskell

Planet Haskell - 18 hours 19 min ago
The Prime Minister of Singapore, Lee Hsien Loong, plans to learn Haskell:

    My children are in IT, two of them – both graduated from MIT. One of them browsed a book and said, “Here, read this”. It said “Haskell – learn you a Haskell for great good”, and one day that will be my retirement reading.

Spotted by Jeremy Yallop.

Hiatus

Planet Haskell - 18 hours 19 min ago
I've decided to take a break from flare / MPv6 during the month of April, and work on my other projects. No doubt I have more ideas than I'll have time for, but here's a quick brainstorm of things I could do:

- tweak the design of this website a bit
- consider moving to hakyll
- rc: get it into fedora
- rc: new release fixing known problems
- pacc / rc integration
- pacc: get it into debian, fedora
- rc and pacc: arch, nixos??
- pacc: develop left recursion ideas
- pacc: replace make with shake?

Looking at the above, I think my plan will be as follows. Fix some rc problems; make a new release. Do some more work on rc-pacc. See if I can make any headway on left recursion. Look at getting things into distros. Conveniently, there are 4 weeks in the month, so that's one week on each item. Oh, but I should get the wheels turning on becoming a Fedora contributor straight away. There's plenty to work through!

Building ghcjs with ghc-7.10.1

Planet Haskell - 18 hours 19 min ago
I've been doing some work recently with the wonderful ghcjs. This is an incredibly exciting project, but it's still early days, and actually building the compiler can be tricky, particularly if you want to use the also-new-and-exciting ghc-7.10.1. I've created this recipe that has worked for me several times, and I hope it may be useful to others too.

Prerequisites

You will need some version of ghc installed (presumably from your distro), plus happy, alex, and cabal. You will also need gcc and gcc-c++: if you do not have the C++ compiler you will get confusing error messages that /bin/cpp failed its sanity check.

I usually create a brand new user to make a test build, so that my usual development user can keep working, even if the build fails. Then I'll copy the ghc binaries to my usual user, and repeat the build of ghcjs and its deps. (It's awkward to copy built packages, as they have paths wired into them. There is a way around this, but I'd rather burn some more CPU cycles.)

Recipe

Download the ghc source. Unpack, and configure with ./configure --prefix /home/ghcjs (or whatever the correct path is). Build with make -j5 (where 5 is one more than the number of CPU cores you have available). Install with make install.

Install cabal:

    cabal update
    git clone https://github.com/haskell/cabal.git
    cabal install ./cabal/Cabal ./cabal/cabal-install

Ensure that ~/.cabal/bin is on the path, that cabal install --help includes the --ghcjs option, and that ghc-pkg list Cabal lists Cabal-1.23.0.0.
Install versions of some libraries patched to build with ghc-7.10.1:

    git clone https://github.com/seereason/haskell-src-meta
    git clone https://github.com/seereason/ansi-wl-pprint --branch pr-base48
    git clone https://github.com/seereason/wl-pprint-text
    git clone https://github.com/seereason/stringsearch
    cabal install ./haskell-src-meta ./ansi-wl-pprint ./wl-pprint-text ./stringsearch

The tar package specifies old-time by default, which causes problems, so build it with the correct flag:

    cabal install -f-old-time tar

I can't quite remember why the lens build needs to be pulled out, but anyway:

    cabal install lens

Install the development version of haddock; --allow-newer is needed to cure an outdated upper bound in haddock-api:

    git clone https://github.com/haskell/haddock.git
    cabal install --allow-newer ./haddock/haddock-api ./haddock/haddock-library

Finally we are ready to install ghcjs itself:

    git clone https://github.com/ghcjs/ghcjs-prim.git
    git clone https://github.com/ghcjs/ghcjs.git
    cabal install ./ghcjs ./ghcjs-prim
    ghcjs-boot --dev --ghcjs-boot-dev-branch ghc-7.10

That's all, folks!

    $ ghcjs -V
    The Glorious Glasgow Haskell Compilation System for JavaScript, version 0.1.0 (GHC 7.10.1)

Notes

I will endeavour to keep this document up to date as packages evolve. If you do encounter any trouble following these instructions, I'd like to hear about it.

An alternative approach to installing ghcjs is to use the Nix package manager: ryantrinkle has been working hard on a Nix ghcjs package. I've not had the chance to look at this yet, but will do so as soon as time permits.
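As a sanity check before kicking off a multi-hour GHC build, it can save time to confirm the prerequisites up front. This is a small sketch of my own, not part of the original recipe; check_tools is a hypothetical helper name:

```shell
#!/bin/sh
# Sketch: fail early if any required build tool is missing from PATH.
# check_tools prints each missing tool and returns non-zero if any are absent.
check_tools() {
  missing=0
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing: $tool"
      missing=1
    fi
  done
  return "$missing"
}

# For this recipe you would run:
#   check_tools ghc happy alex cabal gcc g++
# Demo with a tool that is certainly present:
check_tools sh && echo "all tools found"
```

Running this before the configure step means a missing happy or g++ surfaces in seconds rather than mid-build.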

Losing our privacy

Planet Haskell - 18 hours 19 min ago
On the 19th of March 2015, a law was introduced in France. That law was entitled “loi du renseignement” and was presented in the wake of the awful Charlie Hebdo week. The main idea of the law is to put people on wire so that terrorist activities can be spotted and dismantled. That law will be voted on and will apply from the 5th of May 2015.

Although such a law sounds great for counter-terrorism, it has several major issues people should be aware of.

You’re on wire

Everyone – in France and abroad as well, of course – could be put on wire. The French government might have access to whatever you do with your internet connection. Crypto-analysts working for the government will read tens of thousands of messages from individuals. They’ll know where you go to and where you come from – think of your smartphone’s GPS capabilities. They’ll know which pictures you take, how much time you spend in your bathroom. They’ll even be able to read your e-mails if they want to.

Of course, most people “just don’t care”. “Why should I care that the government knows the litter brand I bought for my cat? I don’t give a damned heck as long as they catch bad people”. That’s a point, but that’s also being blind. Digital communication is not only about yourself. It’s about people. It’s about you and your friends. You, and your contacts. You can’t choose for them whether they’d like people to watch over their shoulders every now and then. If the government has access to your private data, it has access to your friends’ and contacts’ data as well. Not giving a damned heck is being selfish.

You’ll be violated

Then comes the issue with what the law is. It gives the government the right to spy on the masses. They could even sell the data they collect. As a counter-argument, the “Commission de contrôle des techniques de renseignement” – French for “Control commission of intelligence techniques” – was created.
That committee is there to watch the government and ensure it doesn’t go out of control and doesn’t violate people’s rights. The issue is that our prime minister has the right to ignore the committee’s decisions. If the committee says “Wait a minute. You don’t have the right to gather that information without asking for M. Dupont’s approval”, the prime minister may answer back “Well, fuck off. I will and you can’t stop me”. And the sad truth is that… yeah, with that law, the prime minister and their team have the right to ignore the committee’s decisions. The committee then has no point. It just gives an opinion. Not a veto.

What would happen if a terrorist hacked into your computer? Would you go to jail because the prime minister stated you were a terrorist? Damn me if I know.

We’re going to lose a right

French people will lose a very important right: the right to privacy. It’ll be sacrificed without your consent for counter-terrorism, giving the government a power it shouldn’t have. You thought the NSA was/is wrong? Sure it is. But when the NSA watches over American and worldwide people, it is illegal, whereas when the French government watches over French and worldwide people, it will be legal. That makes the French government way worse than the NSA to me.

I think the first thing I’ll do will be to revoke my French VPS subscription and move it out of France, to a country in which people still have privacy. Keep your communication encrypted as much as possible (ssh and so on). And as a bad joke, don’t leave your camera in your bedroom. You might be spied on by Manuel Valls while having sex with your girlfriend.

Another use for strace (isatty)

Planet Haskell - 18 hours 19 min ago
(This is a followup to an earlier article describing an interesting use of strace.)

A while back I was writing a talk about Unix internals and I wanted to discuss how the ls command does a different display when talking to a terminal than otherwise:

    [screenshot: ls to a terminal]
    [screenshot: ls not to a terminal]

How does ls know when it is talking to a terminal? I expect that it uses the standard POSIX function isatty. But how does isatty find out? I had written down my guess. Had I been programming in C, without isatty, I would have written something like this:

    @statinfo = stat STDOUT;
    if ( $statinfo[2] & 0060000 == 0020000
         && ($statinfo[6] & 0xff) == 5 ) { say "Terminal" }
    else { say "Not a terminal" }

(This is Perl, written as if it were C.) It uses fstat (exposed in Perl as stat) to get the mode bits ($statinfo[2]) of the inode attached to STDOUT, and then it masks out the bits that determine whether the inode is a character device file. If so, $statinfo[6] is the major and minor device numbers; if the major number (low byte) is equal to the magic number 5, the device is a terminal device. On my current computers the magic number is actually 136. Obviously this magic number is nonportable. You may hear people claim that those bit operations are also nonportable. I believe that claim is mistaken.

The analogous code using isatty is:

    use POSIX 'isatty';
    if (isatty(STDOUT)) { say "Terminal" }
    else { say "Not a terminal" }

Is isatty doing what I wrote above? Or something else? Let's use strace to find out. Here's our test script:

    % perl -MPOSIX=isatty -le 'print STDERR isatty(STDOUT) ? "terminal" : "nonterminal"'
    terminal
    % perl -MPOSIX=isatty -le 'print STDERR isatty(STDOUT) ? "terminal" : "nonterminal"' > /dev/null
    nonterminal

Now we use strace:

    % strace -o /tmp/isatty perl -MPOSIX=isatty -le 'print STDERR isatty(STDOUT) ? "terminal" : "nonterminal"' > /dev/null
    nonterminal
    % less /tmp/isatty

We expect to see a long startup as Perl gets loaded and initialized, then whatever isatty is doing, the write of nonterminal, and then a short teardown, so we start searching at the end and quickly discover, a couple of screens up:

    ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7ffea6840a58) = -1 ENOTTY (Inappropriate ioctl for device)
    write(2, "nonterminal", 11)             = 11
    write(2, "\n", 1)                       = 1

My guess about fstat was totally wrong! The actual method is that isatty makes an ioctl call; this is a device-driver-specific command. The TCGETS parameter says what the command is, in this case “get the terminal configuration”. If you do this on a non-device, or a non-terminal device, the call fails with the error ENOTTY. When the ioctl call fails, you know you don't have a terminal. If you do have a terminal, the TCGETS command has no effect, because it is a passive read of the terminal state. Here's the successful call:

    ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
    write(2, "terminal", 8)                 = 8
    write(2, "\n", 1)                       = 1

The B38400 opost… stuff is the terminal configuration; 38400 is the baud rate.

(In the past the explanatory text for ENOTTY was the mystifying “Not a typewriter”, even more mystifying because it tended to pop up when you didn't expect it. Apparently Linux has revised the message to the possibly less mystifying “Inappropriate ioctl for device”.)

(SNDCTL_TMR_TIMEBASE is mentioned because apparently someone decided to give their SNDCTL_TMR_TIMEBASE operation, whatever that is, the same numeric code as TCGETS, and strace isn't sure which one is being requested. It's possible that if we figured out which device was expecting SNDCTL_TMR_TIMEBASE, and redirected standard output to that device, isatty would erroneously claim that it was a terminal.)
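The same experiment is easy to reproduce from the shell, since the test builtin's -t operator asks whether a file descriptor is attached to a terminal; it is implemented with isatty, so on Linux it should bottom out in the same TCGETS ioctl. A quick sketch (describe_stdout is my own helper name):

```shell
#!/bin/sh
# Report whether stdout is attached to a terminal, as ls does internally.
describe_stdout() {
  if [ -t 1 ]; then
    echo "terminal"
  else
    echo "nonterminal"
  fi
}

describe_stdout          # "terminal" when run interactively
describe_stdout | cat    # always "nonterminal": stdout is now a pipe
```

Running the piped line under strace -f should show the same ioctl(1, TCGETS, ...) call failing with ENOTTY that the article found for Perl's isatty.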

ssh, Darcs Hub vulnerability

Planet Haskell - 18 hours 19 min ago
I recently learned of a serious undocumented vulnerability in the ssh package. This is a minimal ssh server implementation used by darcsden to support darcs push/pull. If you use the ssh package, or you have darcsden’s darcsden-ssh server running, you should upgrade to/rebuild with the imminent ssh-0.3 release right away. Or if you know of someone like that, please let them know. darcsden is of course the basis for Darcs Hub. Here’s the announcement I sent to users there a few days ago, with more details.

Hello darcs hub users,

This is Simon Michael, operator of hub.darcs.net, with the first all-darcs-hub-users announcement. You’re receiving this because you have an email address configured in your darcs hub user settings. Thank you for using darcs hub, and for any feedback/bug reports/patches you may have sent. Usage is growing steadily, and I plan to blog more about it soon at joyful.com.

This email is to announce a recently patched security vulnerability in darcs hub’s SSH server.

Timeline:

3/21: a software developer reports that the haskell “ssh” library used by darcs hub does not check for a valid signature on the public key during authentication. This means it was possible to authenticate as any other ssh user if you knew their public key.
3/21-: I discuss the issue with a small number of core darcs developers and the ssh author.
3/25: A preliminary fix is deployed. We believe this closed the vulnerability.
4/6: A more comprehensive and tested fix is deployed.
4/15: This announcement is sent to current darcs hub users with valid email addresses (714 of 765 users).
4/20: Public disclosure via blog, haskell mail lists and the issue tracker (darcsden #130).

Impact and current status:

We believe the vulnerability is now fixed. But we are not cryptographers - I’m sure the new ssh maintainer would welcome any help from some of those. We have no reason to believe anyone discovered or exploited the vulnerability.
Also, it seems unlikely there’s anything hosted on darcs hub that would attract this kind of attention. darcs hub logs are not good enough to be certain, however. It’s possible I’ll find a way to be more certain by looking at file timestamps or something.

The weakness was present in darcs hub’s ssh server since it went live (and in darcsden.com before that). As mentioned, it was possible to authenticate via ssh as another user if you provided their public ssh key. With ssh access, it’s possible to create, delete, modify or replace any repository in that darcs hub account (but not possible to change user settings in the web app, or to access the system hosting darcs hub).

The worst-case scenario we’ve imagined is that a motivated attacker could have authenticated as you and replaced your repo with one that looks just like it, but with patches altered or added, any time since you created the repo on darcs hub (or on darcsden.com, if you moved it from there). So if you’re paranoid/careful you may want to check the integrity of your repos, eg by reviewing the repo history (“changes” button on the website, “darcs log [-s] [-v]” at the console).

If you have more questions about this, you can contact me (simon@joyful.com) and if necessary Ganesh Sittampalam (ganesh@earth.li) privately.

Future plans:

- Public announcement on 4/20
- I’ll add a security section to the darcs hub FAQ
- Ganesh has stepped up to be maintainer of the ssh package, and will make a new release soon
- I’ll do a darcsden release not too long after that
- We’ll need to figure out darcs hub’s sustainability plan. As it grows and more of you rely on it, so does the need for a revenue stream to allow decent maintenance and oversight. This could be from funding, donations, charging for private repos or something else.

Also, some logistical things to be aware of:

- this announcement has been sent via MailChimp, and as yet there’s no automatic integration between MailChimp and your settings on hub.darcs.net.
- remember that darcs hub’s issue tracker is here, and that it does not yet send email notifications - to see replies to an issue, you must visit the issue page.
- darcs hub’s password recovery emails may not always reach you - if you’re experiencing this, please contribute to #123.

Needless to say, I regret the vulnerability and am pleased to have it closed. Of course we are not alone, eg github had their own incident. Thank you very much to all who have been helping with this, especially the original reporter for letting us all know, and Ganesh for providing swift and high quality fixes.

2015-04-20T23:10:00Z

Double blind

Planet Haskell - 18 hours 19 min ago
Whether and why and how to double-blind depends on what we’re trying to do together, what’s just, and what’s humanly achievable—all assumptions hard to pin down even if folks are willing. But here’s how I think about it.

Suppose we want to estimate the effect of reading a given paper on a person. Maybe this is because we want to decide whether to publish a paper, to promote its authors, or whatever. In the first sentence of this paragraph, I sneaked in the generic noun phrase “a person” as if there is a given person, but in fact there is a distribution over persons (possibly current and future, real and fictional) that we care about.

A typical approach to this estimation task is to feed the paper to persons who we hope can both represent the intended audience (by being a member, modeling the members, or both) and report their experience (by writing a review with evaluation, advice, or both). The estimation cannot be perfect as long as the reviewing panel is not exactly the same as the audience. In fact, a reviewer may well not be a typical audience member. For example, it is folklore that a research paper should be written for a first-year graduate student, but most reviewers are not first-year graduate students. This can make sense because a skilled and knowledgeable reviewer may be able to model a typical audience member without being one. Still, no reviewer or reviewing process is omniscient (or perfectly rational (or perfectly altruistic)). Reviewers could use help—perhaps in the form of oxygen, coffee, or double-blinding.

On this view, I expect (which is not to say that others should expect) double-blinding to help if our intended audience contains lots of people who don’t know the authors and their research trajectories, so that a reviewer could better model how those people would react to the paper.
As I said above, the precise audience is hard to pin down, but those people might include first-year graduate students as well as the fictional reader named “model-theoretic semantics” who evaluates a paper for “truth”. (Thanks to Tim Chevalier for prompting me to write this.) 2015-04-20T21:01:55Z

Multiple inheritance, revisited

Planet Haskell - 18 hours 19 min ago
Via @ReifyReflect (Sam Lindley) and @PedalKings. Previously: Multiple inheritance.

Announcing: first release of Stackage CLI (Command Line Tools)

Planet Haskell - 18 hours 19 min ago
We're happy to announce the first release of stackage-cli (Command Line Interface). This project got started by a request in a somewhat unlikely place: a MinGHC issue. We started on this as a way to automate some of the instructions available on stackage.org, but quickly realized there was a lot more potential to make the lives of developers even better.

To get started, just run cabal update && cabal install stackage-cli. You can see more information in the Github README. In the rest of this blog post, we'll cover some of the motivation for the tool, and directions we'd like to see it head in the future.

Manage your cabal.config

Stackage's primary mechanism for setup is to give cabal-install a set of constraints on which package versions to install, via a cabal.config file in your project. Typically, setting up Stackage is a matter of running wget https://www.stackage.org/lts/cabal.config. However, there are a few minor annoyances around this:

- Windows users may not have wget available (this was the main point of the original MinGHC issue).
- There's a non-obvious upgrade process, which can require wiping out your old package database.

With stackage-cli, you just run stackage init to get a cabal.config file. stackage purge deletes that file and wipes your package database. And stackage upgrade does both.

Sandboxes

The above is nice, but not especially noteworthy. The sandbox feature is where the tool really begins to shine. As many of you know, cabal sandboxes are highly touted for minimizing "cabal hell" problems, by isolating interactions between different projects. However, there are two downsides of sandboxes:

- They still don't solve the problem of coming up with an initial installation plan.
- Having a separate sandbox for each project takes a lot of disk space, and requires significant CPU time to get started on a new project.

stackage-cli fixes both of those by introducing automated shared sandboxes.
The idea is this: when you use a single LTS Haskell version, you're hardcoding your dependency tree. Therefore, multiple projects using the same LTS version can share the same sandbox. To demonstrate this, let me proceed by making fun of Yesod a bit with a shell session:

    $ yesod init
    # Answer some questions, get a bunch of output
    $ cd project1
    $ stackage sandbox init
    Writing a default package environment file to /home/vagrant/Desktop/project1/cabal.sandbox.config
    Creating a new sandbox at /home/vagrant/.stackage/sandboxes/ghc-7.8.4/lts-2.3
    $ cabal install --run-tests
    # Let's get some coffee
    # OK, lunch time
    # Fine, I'll actually go work out today
    # Still not done???
    # OK, done
    $ cd ..
    $ yesod init
    # Start project 2
    $ cd project2
    $ stackage sandbox init
    Initializing at snapshot: lts-2.3
    Writing a default package environment file to /home/vagrant/Desktop/project2/cabal.sandbox.config
    Using an existing sandbox located at /home/vagrant/.stackage/sandboxes/ghc-7.8.4/lts-2.3
    $ cabal install --run-tests
    # Wait, it's already configuring
    # Oh, it's done. Crap, no coffee

The point of this little demonstration is: you compile your package set once. You then get to reuse it across multiple projects. Yes, the first installation takes just as long and just as much disk space. But subsequent uses will be immediate.

By the way, please pay attention to the caveats.

Better team collaboration

We use LTS Haskell at FP Complete and on client projects, and it has eliminated problems of incompatible package versions. Our recommendation is to check in the cabal.config file to your repo, or equivalently make sure that everyone on your team starts development by running the same init command, e.g. stackage sandbox init lts-1.15.

To sandbox or not to sandbox?

We'd typically recommend starting off with sandboxes, and only leaving them if you have a good reason. Given that, you may be wondering why we have two versions of the command.
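As an aside, the sharing in the session above works because the sandbox location is derived only from the GHC version and the snapshot name, never from the project directory. A minimal sketch of that idea (sandbox_path is a hypothetical helper, not part of stackage-cli):

```shell
#!/bin/sh
# Sketch: two projects pinned to the same GHC + LTS snapshot resolve to the
# same sandbox directory, so the package set is compiled only once.
sandbox_path() {
  ghc_ver=$1; snapshot=$2
  echo "$HOME/.stackage/sandboxes/ghc-$ghc_ver/$snapshot"
}

p1=$(sandbox_path 7.8.4 lts-2.3)   # from project1
p2=$(sandbox_path 7.8.4 lts-2.3)   # from project2
[ "$p1" = "$p2" ] && echo "shared: $p1"
```

Change either the GHC version or the snapshot and the path changes, so incompatible package sets never collide.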
One answer is that there are multiple sandboxing technologies already out there. For example, hsenv still provides some functionality that people prefer to cabal sandboxes. Another answer is that there are other sandboxing approaches that have yet to be fully explored, such as Docker containers, which may provide significant advantages. We don't want to tie the tool down to one implementation.

Plugins

One final point. If you pay close attention, you may notice that stackage-cli provides a few different executables. We've decided to emulate the Git approach to the command line tool: we have a wrapper executable called stackage (and a shorter abbreviation stk) which will call out to any other tools with the name stackage-*. stackage-cli ships with executables called stackage-init, stackage-purge, stackage-upgrade, and stackage-sandbox. These all work as plugins to the main executable. This provides for a number of nice features:

- The main stackage-cli can remain light-weight.
- New functionality can be added easily by other packages.
- Others in the community are welcome to release their own Stackage plugins.

The only requirements placed on a Stackage plugin are that it must:

- Be named stackage-something
- When called with the argument --summary, give a short description of its functionality

The Stackage.CLI module provides some helper functions for this.

There are already a few Stackage plugins on Hackage:

- stackage-update provides a faster, more secure variant of cabal update. With both stackage-cli and stackage-update installed, you now just run stk update.
- stackage-view is an interactive code explorer.
- stackage-curator is used by the Stackage team to produce Nightly and LTS releases.

Future work

We hope others will join in the fun with both the core stackage-cli tool, and by producing their own plugins. If you have ideas, please bring them up on the mailing list, issue trackers, or elsewhere. We also have plans for further open sourcing of our internally built code bases.
We're still fixing up some details, but the tools we've developed have been in production use by our customers for a while now, and we're excited to get them into the community's hands. We're also looking at providing tools to provide greater package download security, and to automate the process of getting a Haskell development environment up and running. Stay tuned.
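To make the plugin contract described earlier concrete: a plain shell script can satisfy both requirements. This is a hedged sketch of a hypothetical stackage-hello plugin (not a real package); the body is written as a function so it is easy to exercise:

```shell
#!/bin/sh
# Hypothetical stackage-hello plugin. Installed as an executable named
# stackage-hello, the stackage wrapper would run it via `stk hello` and
# query it with --summary when listing available plugins.
stackage_hello() {
  if [ "$1" = "--summary" ]; then
    echo "Print a friendly greeting"   # the required short description
  else
    echo "Hello from a Stackage plugin!"
  fi
}

stackage_hello --summary
stackage_hello
```

Anything on the PATH matching stackage-* and answering --summary would be picked up the same way, whatever language it is written in.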

IDE

Planet Haskell - 18 hours 19 min ago
I think that a good IDE is very much missing. The future of School of Haskell and FP Haskell Center

darcs 2.10.0 release

Planet Haskell - 18 hours 19 min ago
Hi all,

The darcs team is pleased to announce the release of darcs 2.10.0.

Downloading

The easiest way to install darcs 2.10.0 from source is by first installing the Haskell Platform (http://www.haskell.org/platform). If you have installed the Haskell Platform or cabal-install, you can install this release by doing:

    $ cabal update
    $ cabal install darcs-2.10.0

Alternatively, you can download the tarball from http://darcs.net/releases/darcs-2.10.0.tar.gz and build it by hand as explained in the README file.

The 2.10 branch is also available as a darcs repository from http://darcs.net/releases/branch-2.10

Feedback

If you have an issue with darcs 2.10.0, you can report it via the web on http://bugs.darcs.net/ . You can also report bugs by email to bugs at darcs.net, or come to #darcs on irc.freenode.net.

What's new since darcs 2.8.5

New features

- darcs rebase: enable deep amending of history (Ganesh Sittampalam)
- darcs pull --reorder: keep local-only patches on top of mainstream patches (Ale Gadea, Ganesh Sittampalam)
- darcs dist --zip: generate a zip archive from a repository (Guillaume Hoffmann)
- patch bundle contexts are minimized by default. Enables bundles to be applied to more repositories. (Guillaume Hoffmann)
- darcs convert export/import for conversion to/from VCSes supporting the fast-export protocol (Petr Rockai, Owen Stephens, Guillaume Hoffmann, Lele Gaifax, Ben Franksen)
- darcs test --backoff: exponential backoff test strategy, faster than bisect on big repositories (Michael Hendricks)
- work normally on sshfs-mounted repositories (Nathaniel Filardo)
- automatic detection of file/directory moves, and of token replaces (Jose Neder)
- patience diff algorithm by default (Jose Neder)
- interactive mode for whatsnew (Dan Frumin)
- tag --ask-deps to create tags that may not include some patches (Ganesh Sittampalam)

User Interface

- add a last question after all patches have been selected to confirm the whole selection (Florent Becker)
- command names: clone is the new name of get and put; log is the new name of changes; amend is the new name of amend-record
- show output of log into a pager by default (Guillaume Hoffmann)
- the output of log is more similar to git's: show patch hash in UI (hash of the patch's metadata); put author and date on separate lines (Guillaume Hoffmann)
- enable to match on patch hash prefix with -h and --hash (Guillaume Hoffmann, Gian Piero Carrubba)
- better messages: better error messages for http and ssh errors (Ernesto Rodriguez); init, add, remove, move and replace print confirmation messages (Guillaume Hoffmann)
- rollback only happens in the working copy (Florent Becker, Guillaume Hoffmann)
- darcs send no longer tries to send a mail by default (Eric Kow)
- when no patch name given, directly invoke text editor (Jose Neder, Guillaume Hoffmann)
- use nano as default text editor instead of vi (Guillaume Hoffmann)
- keep log files for patch name and mail content in _darcs (Ale Gadea)
- optimize and convert are now supercommands (Guillaume Hoffmann)
- improve darcs help environment and darcs help markdown (Radoslav Dorcik, Guillaume Hoffmann)
- warn about duplicate tags when creating a new one (Ale Gadea)
- allow darcs mv into known, but deleted in working, file (Owen Stephens)
- improve --not-in-remote, allowing multiple repos and use default (Owen Stephens)

Performance

- faster darcs diff (Petr Rockai)
- faster log and annotate thanks to patch index data structure (BSRK Aditya, Benedikt Schmidt, Eric Kow, Guillaume Hoffmann, Ganesh Sittampalam)
- faster push via ssh by using compression (Ben Franksen)
- cloning to an ssh destination (formerly darcs put) is more efficient (Guillaume Hoffmann)
- faster internal representation of patch hashes (Guillaume Hoffmann)
- when cloning from http, use packs in a more predictable way (Guillaume Hoffmann)
- store global cache in bucketed format (Marcio Diaz)

Developer-related

- require and support GHC 7.4 to 7.10 (Ganesh Sittampalam)
- replace type witness CPP macros with plain Haskell (Eric Kow)
- hashed-storage is bundled into darcs (Ganesh Sittampalam)
- replace C SHA256 bindings with external libraries (Ganesh Sittampalam)
- move the bits of the datetime package we need into Darcs.Util.DateTime (Ganesh Sittampalam)
- build Darcs once rather than thrice (Eric Kow)
- remove home page and manual from darcs' repository (Guillaume Hoffmann)
- run tests through cabal test (Ryan Desfosses)
- run fewer darcs-1 related tests in testsuite (Ganesh Sittampalam)
- use custom replHook to fix cabal repl (Owen Stephens)
- darcs.cabal: make Haskell2010 the default-language for all stanzas (Ben Franksen)
- always compile with mmap support (Ganesh Sittampalam)
- new options subsystem (Ben Franksen)
- various cleanups, code restructuring and refactoring, haddocks (Will Langstroth, Owen Stephens, Florent Becker, Guillaume Hoffmann, Michael Hendricks, Eric Kow, Dan Frumin, Ganesh Sittampalam)

Issues resolved in Darcs 2.10

- issue346: implement "patience diff" from bzr (Jose Neder)
- issue642: automatic detection of file renames (Jose Neder)
- issue822: generalized the IO Type for better error messages and exception handling (Ernesto Rodriguez)
- issue851: interactive mode for whatsnew (Dan Frumin)
- issue904: fix record on Linux/FUSE/sshfs (fall back to sloppy locks automatically) (Nathaniel Filardo)
- issue1066: clone to ssh URL by locally cloning then copying by scp (Guillaume Hoffmann)
- issue1268: enable to write darcs init x (Radoslav Dorcik)
- issue1416: put log files in tempdir instead of in working dir (Ale Gadea)
- issue1514: send --minimize-context flag for send (Guillaume Hoffmann)
- issue1624: bucketed cache (Marcio Diaz)
- issue1828: file listing and working --dry-run for mark-conflicts (Guillaume Hoffmann)
- issue1987: garbage collection for inventories and patches (Marcio Diaz)
- issue2181: put cache in $XDG_CACHE_HOME (~/.cache by default) (Guillaume Hoffmann)
- issue2193: make that finalizeTentativeChanges no longer run tests (Guillaume Hoffmann)
- issue2198: move repo testing code to Darcs.Repository.Test (Guillaume Hoffmann)
- issue2200: darcs replace complains if no filepaths given (Owen Stephens)
- issue2204: do not send mail by default (Eric Kow)
- issue2237: prevent patch index creation for non-hashed repos (Owen Stephens)
- issue2235: accept RFC2822 dates (Dave Love)
- issue2246: add default boring entry for emacs session save files (Owen Stephens)
- issue2253: attempting to use the patch index shouldn't create it on OF repos (Owen Stephens)
- issue2278: document default value for --keep-date / --no-keep-date (Mark Stosberg)
- issue2199: getMatchingTag needs to commute for dirty tags (Ganesh Sittampalam)
- issue2247: move patch index creation into the job running code (Ganesh Sittampalam)
- issue2238: let optsModifier remove all occurrences of LookForAdds (Gian Piero Carrubba)
- issue2236: make 'n' an alias for 'q' in lastregret questions (Gian Piero Carrubba)
- issue2155: expurgate the non-functional annotate --xml-output option (Dave Love)
- issue2248: always clean up rebase-in-progress state (Ganesh Sittampalam)
- issue2270: fixed darcs changes -i --only-to-files (Sebastian Fischer)
- issue2282: don't allow remote operations to copy the rebase patch (Ganesh Sittampalam)
- issue2287: obliterate -O doesn't overwrite existing file (Radoslav Dorcik)
- issue2227: move the rebase patch to the end before an amend-record (Ganesh Sittampalam)
- issue2277: rebase suspend and unsuspend supports --summary (Radoslav Dorcik)
- issue2311: posthook for 'get' should run in created repo (Sebastian Fischer)
- issue2312: posthooks for 'record' and 'amend-record' should receive DARCS_PATCHES (Sebastian Fischer)
- issue2163: new option for amend, select author for patch stealing (Radoslav Dorcik)
- issue2321: when no patch name given, directly invoke text editor (Jose Neder)
- issue2320: save prompted author name in ~/.darcs/author instead of ./_darcs/prefs/author (Jose Neder)
- issue2250: tabbing in usageHelper - pad by max length of command name (BSRK Aditya)
- issue2309: annotate includes line numbers (Owen Stephens)
- issue2334: fix win32 build removing file permission functions (Guillaume Hoffmann)
- issue2343: darcs amend-record does not record my change (Jose Neder)
- issue2335: one liner when adding tracked files if not verbose (Guillaume Hoffmann)
- issue2313: whatsnew -l: Stack space overflow (Jose Neder)
- issue2347: fix amend-record --prompt-long-comment (Guillaume Hoffmann)
- issue2348: switch to cabal's test framework (Ryan Desfosses)
- issue2209: automatically detect replace (Jose Neder)
- issue2332: ignore case of characters in prompt (Guillaume Hoffmann)
- issue2263: option --set-scripts-executable is not properly documented (Ale Gadea)
- issue2367: rename amend-record to amend, make --unrecord more visible (Guillaume Hoffmann)
- issue2345: solution using cabal's checkForeignDeps (Dan Frumin)
- issue2357: switching to regex-compat-tdfa for unicode support (Dan Frumin)
- issue2379: only use packs to copy pristine when up-to-date (Guillaume Hoffmann)
- issue2365: correctly copy pristine in no-working-dir clones (Guillaume Hoffmann)
- issue2244: darcs tag should warn about duplicate tags (Ale Gadea)
- issue2364: don't break list of 'bad sources' (Sergei Trofimovich)
- issue2361: optimize --reorder runs forever with one repository (Ale Gadea)
- issue2364: fix file corruption on double fetch (Sergei Trofimovich)
- issue2394: make optimize a supercommand (Guillaume Hoffmann)
- issue2396: make convert a supercommand and enhance help strings (Guillaume Hoffmann)
- issue2314: output-auto-name in defaults file (Ben Franksen)
- issue2388: check if inventories dir has been created (Owen Stephens)
- issue2249: rename isFile to isValidLocalPath and WorkRepoURL to WorkRepoPossibleURL (Mateusz Lenik)
- issue2153: allow skipping backwards through depended-upon patches (Andreas Brandt)
- issue2380: allow darcs mv into known, but deleted in working, file (Owen Stephens)
- issue2403: need to avoid moving the rebase patch to the end (Ganesh Sittampalam)
- issue2409: implement darcs rebase apply (Ganesh Sittampalam)
- issue2385: invoke pager without temporary file (Guillaume Hoffmann)
- issue2333: better error message when pushing and darcs not in path (Ben Franksen)

Known issues

These are known new issues in darcs 2.10.0:

- issue2269: rebase should warn about stolen patches at suspend, not unsuspend
- issue2272: darcs rebase unsuspend should automate or semi-automate handling unrecorded changes
- issue2276: darcs rebase unsuspend needs UI improvements for "You are not... Amend anyway?"
- issue2359: convert --export mishandles Unicode filenames
- issue2372: Please remove "HINT: I could not reach..." message
- issue2423: diff only respecting --diff-command when a diff.exe is present
- issue2436: rollback --patches takes ages before first prompt
- issue2445: internal error if suspended patch is pulled into repository again
- issue2449: test harness/shelly: need to handle mis-encoded/binary data

Another use for strace (groff)

Planet Haskell - 18 hours 19 min ago
The marvelous Julia Evans is always looking for ways to express her love of strace and has now written a zine about it. I don't use strace that often (not as often as I should, perhaps), but every once in a while a problem comes up for which it's not only just the right thing to use but the only thing to use. This was one of those times.

I sometimes use the ancient Unix drawing language pic. Pic has many good features, but it is unfortunately coupled too closely to the Roff family of formatters (troff, nroff, and the GNU project version, groff). It only produces Roff output, and not anything more generally useful like SVG or even a bitmap. I need raw images to inline into my HTML pages. In the past I have produced these with a jury-rigged pipeline: groff to produce PostScript, then GNU Ghostscript (gs) to translate the PostScript to a PPM bitmap, some PPM utilities to crop and scale the result, and finally ppmtogif or whatever. This has some drawbacks. For example, gs requires that I set a paper size, and its largest paper size is A0. This means that large drawings go off the edge of the “paper” and gs discards the out-of-bounds portions.

So yesterday I looked into eliminating gs. Specifically, I wanted to see if I could get groff to produce the bitmap directly. GNU groff has a -Tdevice option that specifies the output device; some choices are -Tps for PostScript output and -Tpdf for PDF output. So I thought perhaps there would be a -Tppm or something like that. A search of the manual did not suggest anything so useful, but did mention -TX100, which had something to do with 100-DPI X window system graphics. But when I tried this, groff only said:

groff: can't find `DESC' file
groff:fatal error: invalid device `X100'

The groff -h output said only:

-Tdev use device dev

So what devices are actually available? strace to the rescue! I did:

% strace -o /tmp/gr groff -Tfpuzhpx

and then a search for fpuzhpx in the output file tells me exactly where groff is searching for device definitions:

% grep fpuzhpx /tmp/gr
execve("/usr/bin/groff", ["groff", "-Tfpuzhpx"], [/* 80 vars */]) = 0
open("/usr/share/groff/site-font/devfpuzhpx/DESC", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/groff/1.22.2/font/devfpuzhpx/DESC", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/lib/font/devfpuzhpx/DESC", O_RDONLY) = -1 ENOENT (No such file or directory)

I could then examine those three directories to see if they existed, and if so, find out what was in them. Without strace here, I would be reduced to groveling over the source, which in this case would likely mean trawling through the autoconf output, and that is something that nobody wants to do.

[ Addendum 20150421: another article about strace. ]
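As an aside, the failed lookups can be pulled out of an strace log mechanically rather than by eyeballing grep output. Here is a small Haskell sketch; the function name failedOpens is mine, it assumes the open(2) log format shown above, and a fuller version would also handle openat(2):

```haskell
import Data.List (isInfixOf, isPrefixOf)

-- Given the text of an strace log, list the paths of open() calls
-- that failed with ENOENT -- i.e. the places a program looked for a
-- file and did not find it.
failedOpens :: String -> [String]
failedOpens =
    map pathOf . filter failedOpen . lines
  where
    failedOpen l = "open(" `isPrefixOf` l && "ENOENT" `isInfixOf` l
    -- The path is the first double-quoted string on the line.
    pathOf = takeWhile (/= '"') . drop 1 . dropWhile (/= '"')
```

Run over the log above, this yields exactly the three devfpuzhpx/DESC candidate paths.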

Safe concurrent MySQL access in Haskell

Planet Haskell - 18 hours 19 min ago
mysql, Bryan O’Sullivan’s low-level Haskell bindings to the libmysqlclient C library, powers a few popular high-level MySQL libraries, including mysql-simple, persistent-mysql, snaplet-mysql-simple, and groundhog-mysql. Most users do not suspect that using mysql as it stands concurrently is unsafe. This article describes the issues and their solutions.

Issue 1: unsafe foreign calls

As of version 0.1.1.8, mysql marks many of its ffi imports as unsafe. This is a common trick to make these calls go faster. In our case, the problem with unsafe calls is that they block a capability (that is, an OS thread that can execute Haskell code). This is bad for two reasons:

Fewer threads executing Haskell code may result in less multicore utilization and degraded overall performance.
If all capabilities get blocked executing related MySQL statements, they may deadlock.

Here’s a demonstration of such a deadlock:

{-# LANGUAGE OverloadedStrings #-}
import Database.MySQL.Simple
import Control.Concurrent
import Control.Concurrent.STM
import Control.Applicative
import Control.Monad
import Control.Exception

main = do
  tv <- atomically $ newTVar 0

  withConn $ \conn -> do
    mapM_ (execute_ conn)
      [ "drop table if exists test"
      , "create table test (x int)"
      , "insert into test values (0)"
      ]

  forM_ [1..2] $ \n -> forkIO $ withConn $ \conn ->
    (do
      execute_ conn "begin"
      putStrLn $ show n ++ " updating"
      execute_ conn "update test set x = 42"
      putStrLn $ show n ++ " waiting"
      threadDelay (10^6)
      execute_ conn "commit"
      putStrLn $ show n ++ " committed"
    ) `finally` (atomically $ modifyTVar tv (+1))

  atomically $ check =<< (>=2) <$> readTVar tv

  where
    withConn = bracket (connect defaultConnectInfo) close

If you run this with stock mysql-0.1.1.8, one capability (i.e.
without +RTS -Nx), and either the threaded or the non-threaded runtime, you’ll see:

1 updating
1 waiting
2 updating
1 committed
test: ConnectionError { errFunction = "query", errNumber = 1205, errMessage = "Lock wait timeout exceeded; try restarting transaction"}

Here’s what’s going on:

Both threads are trying to update the same row inside their transactions;
MySQL lets the first update pass but blocks the second one until the first update is committed (or rolled back);
The first transaction never gets a chance to commit, because it has no OS thread (capability) to execute on. The only capability is blocked waiting for the second UPDATE to finish.

The solution is to patch mysql to mark its ffi calls as safe (and use the threaded runtime). Here’s what would happen then:

To compensate for the OS thread blocked executing the second UPDATE, the GHC runtime moves the capability to another thread (either fresh or drawn from a pool);
The first transaction finishes on this unblocked capability;
MySQL then allows the second UPDATE to go through, and the second transaction finishes as well.

Issue 2: uninitialized thread-local state in libmysqlclient

To quote the docs:

When you call mysql_init(), MySQL creates a thread-specific variable for the thread that is used by the debug library (among other things). If you call a MySQL function before the thread has called mysql_init(), the thread does not have the necessary thread-specific variables in place and you are likely to end up with a core dump sooner or later.
Here’s the definition of the thread-local state data structure, taken from mariadb-10.0.17:

struct st_my_thread_var
{
  int thr_errno;
  mysql_cond_t suspend;
  mysql_mutex_t mutex;
  mysql_mutex_t * volatile current_mutex;
  mysql_cond_t * volatile current_cond;
  pthread_t pthread_self;
  my_thread_id id;
  int volatile abort;
  my_bool init;
  struct st_my_thread_var *next,**prev;
  void *keycache_link;
  uint lock_type; /* used by conditional release the queue */
  void *stack_ends_here;
  safe_mutex_t *mutex_in_use;
#ifndef DBUG_OFF
  void *dbug;
  char name[THREAD_NAME_SIZE+1];
#endif
};

This data structure is used by both server and client code, although it seems that most of these fields are used by the server, not the client (with the exception of the dbug thing), which would explain why Haskellers have gotten away with not playing by the rules so far. However: I am not an expert, and I spent only about 20 minutes grepping the codebase. Am I sure that there’s no code path in the client that accesses this? No. Am I going to ignore the above warning and bet the stability of my production system on MySQL/MariaDB devs never making use of this thread-local state? Hell no!

What should we do to obey the rules? First, make the threads that work with MySQL bound, i.e. launch them with forkOS instead of forkIO. Otherwise, even if an OS thread is initialized, the Haskell thread may later be scheduled on a different, uninitialized OS thread. If you create a connection in a thread, use it, and dispose of it, then using a bound thread should be enough. This is because mysql’s connect calls mysql_init, which in turn calls mysql_thread_init. However, if you are using a thread pool or otherwise sharing a connection between threads, then connect may occur on a different OS thread than a subsequent use. Under this scenario, every thread needs to call mysql_thread_init prior to other MySQL calls.

Issue 3: non-thread-safe calls

The mysql_library_init function needs to be called prior to any other MySQL calls.
It only needs to be called once per process, although it is harmless to call it more than once. It is called implicitly by mysql_init (which is in turn called by connect). However, this function is documented as not thread-safe. If you connect from two threads simultaneously, bad or unexpected things can happen. Also, if you are calling mysql_thread_init as described above, it should be called after mysql_library_init. This is why it is a good idea to call mysql_library_init at the very beginning, before you spawn any threads.

Using a connection concurrently

This is not specific to the Haskell bindings, just something to be aware of: you should not use the same MySQL connection simultaneously from different threads. First, the docs explicitly warn you about that:

Multiple threads cannot send a query to the MySQL server at the same time on the same connection

(there are some details on this in case you are interested). Second, the MySQL wire protocol is not designed to multiplex several communication «threads» onto the same TCP connection (unlike, say, AMQP), and trying to do so will probably confuse both the server and the client.

Example

Here is, to the best of my knowledge, a correct example of concurrently accessing a MySQL database. The example accepts requests at http://localhost/key and looks up that key in a MySQL table. It needs to be compiled against my fork of mysql, which has the following changes compared to 0.1.1.8:

Unsafe calls are marked as safe (the patch is due to Matthias Hörmann);
mysql_library_init and mysql_thread_init are exposed under the names initLibrary and initThread.

(How to use a fork that is not on Hackage? For example, through a stackage snapshot.)
{-# LANGUAGE OverloadedStrings, RankNTypes #-}
import Network.Wai
import qualified Network.Wai.Handler.Warp as Warp
import Network.HTTP.Types
import qualified Database.MySQL.Base as MySQL
import Database.MySQL.Simple
import Control.Exception (bracket)
import Control.Monad (void)
import Control.Concurrent (forkOS)
import qualified Data.Text.Lazy.Encoding as LT
import Data.Pool (createPool, destroyAllResources, withResource)
import Data.Monoid (mempty)
import GHC.IO (unsafeUnmask)

main = do
  MySQL.initLibrary
  bracket mkPool destroyAllResources $ \pool ->
    Warp.runSettings
      (Warp.setPort 8000 . Warp.setFork forkOSWithUnmask $ Warp.defaultSettings) $
      \req resp -> do
        MySQL.initThread
        withResource pool $ \conn ->
          case pathInfo req of
            [key] -> do
              rs <- query conn "SELECT `desc` FROM `test` WHERE `key` = ?" (Only key)
              case rs of
                Only result : _ ->
                  resp $ responseLBS ok200
                    [(hContentEncoding, "text/plain")]
                    (LT.encodeUtf8 result)
                _ -> resp e404
            _ -> resp e404
  where
    mkPool = createPool (connect defaultConnectInfo) close 1 60 10
    e404 = responseLBS notFound404 [] mempty

forkOSWithUnmask :: ((forall a . IO a -> IO a) -> IO ()) -> IO ()
forkOSWithUnmask io = void $ forkOS (io unsafeUnmask)

The forkOSWithUnmask business is only an artifact of the way warp spawns threads; normally a simple forkOS would do. On the other hand, this example shows that in the real world you sometimes need to make an extra effort to get bound threads. Even warp got this feature only recently. Note that this isn’t the most efficient implementation, since it essentially uses OS threads instead of lightweight Haskell threads to serve requests.

On destructors

The *_init functions allocate memory, so there are complementary functions, mysql_thread_end and mysql_library_end, which free that memory. However, you probably do not want to call them. Here’s why. Most multithreaded Haskell programs have a small number of OS threads managed by the GHC runtime. These threads are also long-lived.
Trying to free the resources associated with those threads won’t gain much, and not doing so won’t do any harm. Furthermore, suppose that you still want to free the resources. When should you do so? Naively calling mysql_thread_end after serving a request would be wrong. It is only the lightweight Haskell thread that is finishing; the OS thread executing it may be executing other Haskell threads at the same time. If you suddenly destroy MySQL’s thread-local state, the effect on those other Haskell threads would be the same as if you had not called mysql_thread_init in the first place. And calling mysql_library_end without mysql_thread_end makes MySQL upset when it sees that not all threads have ended.

References

GitHub issue bos/mysql#11: Address concurrency
Leon P Smith: Concurrency And Foreign Functions In The Glasgow Haskell Compiler
Edward Z. Yang: Safety first: FFI and threading
Simon Marlow, Simon Peyton Jones, Wolfgang Thaller: Extending the Haskell Foreign Function Interface with Concurrency
MySQL 5.6 Reference Manual: Writing C API Threaded Client Programs

2015-04-17T20:00:00Z http://ro-che.info/articles/2015-04-17-safe-concurrent-mysql-haskell

Announcing stackage-update

Planet Haskell - 18 hours 19 min ago
I just released a simple tool to Hackage called stackage-update. Instead of repeating myself, below is a copy of the README.md from the GitHub repository.

This package provides an executable, stackage-update, which provides the same functionality as cabal update (it updates your local package index). However, instead of downloading the entire package index as a compressed tarball over insecure HTTP, it uses git to incrementally update your package list, and downloads over secure HTTPS. It has minimal Haskell library dependencies (all dependencies are shipped with GHC itself) and only requires that the git executable be available on the PATH. It builds on top of the all-cabal-files repository.

Advantages

Versus standard cabal update, using stackage-update gives the following advantages:

Only downloads the deltas since the last time you updated your index, thereby requiring significantly less bandwidth
Downloads over a secure HTTPS connection instead of an insecure HTTP connection
Note that the all-cabal-files repo is also updated from Hackage over a secure HTTPS connection

Usage

Install from Hackage as usual with:

cabal update
cabal install stackage-update

From then on, simply run stackage-update instead of cabal update.

Why stackage?

You may be wondering why this tool is called stackage-update, when in fact the functionality is useful outside of the Stackage project itself. The reason is that the naming allows it to play nicely with the other Stackage command line tooling. Concretely, that means that if you have stackage-cli installed, stackage-update works as a plugin. However, you can certainly use stackage-update on its own, without any other tooling or dependencies on the Stackage project.

Future enhancements

If desired, add support for GPG signature checking when cloning/pulling from the all-cabal-files repo
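The incremental update described above — clone the package-index repository once, then fetch only the deltas on later runs — can be sketched in a few lines of Haskell. This is a conceptual sketch, not stackage-update's actual implementation: updateIndex is an illustrative name, the real tool additionally maintains cabal's index format on top of the git checkout, and error handling is elided.

```haskell
import System.Directory (doesDirectoryExist, getCurrentDirectory)
import System.Process (callProcess)

-- Keep a local clone of a git repository up to date.  On the first
-- run this clones the repository; on later runs it fast-forwards,
-- transferring only the new commits.
updateIndex :: String -> FilePath -> IO ()
updateIndex url dir = do
  exists <- doesDirectoryExist dir
  if exists
    then callProcess "git" ["-C", dir, "pull", "--ff-only", "--quiet"]
    else callProcess "git" ["clone", "--quiet", url, dir]
```

Pointed at the all-cabal-files repository, repeated runs download only what changed since the last update, which is where the bandwidth savings come from.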

Mid/Senior Software Development Engineer at Lookingglass Cyber Solutions (Full-time)

Planet Haskell - 18 hours 19 min ago
Lookingglass is the world leader in cyber threat intelligence management. We collect and process all-source intelligence, connecting organizations to valuable information through our cyber threat intelligence monitoring and management platform. Our solutions allow customers to continuously monitor threats far and near, such as the presence of botnets, hosts associated with cybercriminal networks, unexpected route changes, and the loss of network resiliency. We are seeking qualified Software Development Engineers to join our team!

Required Skills & Experience:

US Citizen or Green Card Holder
Location: MD/VA based
Bachelor’s or Master’s degree in computer science, engineering, information systems, or mathematics
3-5 yrs experience with the full development life-cycle, with shipping products
2+ yrs experience with functional and OO languages – have worked in functional paradigms with immutable data models
3+ yrs building distributed system products including messaging, NoSQL, RPC / RMI mechanisms – including building Service Oriented Architectures
Proficiency with data structure and algorithm analysis
Experience working in a TDD environment

Nice to Have:

Product development experience in network security, content security or cyber threat intelligence
Experience with concurrency models
Experience with key-value distributed databases
Experience deploying production software in Haskell, OCaml, Clojure, or Lisp
Comfortable writing one or more of the following: JavaScript, GoLang, Ruby

At Lookingglass, we believe our employees are our greatest assets. We offer competitive salaries and a full benefits package, with available medical, dental, vision, and disability insurance, a 401k retirement package, and stock options. We offer generous PTO, a well-supplied kitchen, and regular team activities. Most importantly, we offer the opportunity to build a world-class product with a team of talented engineers.

Get information on how to apply for this position. 2015-04-16T21:46:48Z

Senior/Principal Software Development Engineer at Lookingglass Cyber Solutions (Full-time)

Planet Haskell - 18 hours 19 min ago
Are you an experienced software engineer in security, networking, cloud, and big data? Are you interested in cyber security or improving the security of the Internet? Do you push yourself to be creative and innovative and expect the same of others? At Lookingglass, we are driven and passionate about what we do. We believe that teams, not individuals, deliver great products. We inspire each other and our customers every day with technology that improves the security of the Internet and of our customers. Behind our success is a team that thrives on collaboration and creativity, delivering meaningful impact to our customers. We are currently seeking qualified Senior/Principal Software Development Engineers to join our team!

Required Skills & Experience:

US Citizen or Green Card Holder
Location: MD/VA or CA based
Bachelor’s or Master’s degree in computer science, engineering, information systems, or mathematics
Strong technical leader with proven experience leading project technologies and mentoring junior development team members
7-10 yrs experience with the full development life-cycle, with shipping products
4-8 yrs experience with functional and OO languages – have worked in functional paradigms with immutable data models
5+ yrs building distributed system products including messaging, NoSQL, RPC / RMI mechanisms, and Service Oriented Architectures
Proficiency with data structures, algorithm analysis, and concurrency programming
Experience working in a TDD environment
Comfortable with aggressive refactoring
Architectural and design skills to map a solution across hardware, software, and business layers in a distributed architecture

Nice to Have:

Product development experience in network security, content security or cyber threat intelligence
Experience with CSP concurrency models
Experience with key-value distributed databases
Experience deploying production software in Haskell, OCaml, Clojure, or Lisp
Polyglot programmer with production experience in imperative, declarative, OO, functional, strongly/weakly typed, static/dynamic, interpreted/compiled languages

Lookingglass believes our employees are our greatest assets. We offer competitive salaries and a full benefits package, with available medical, dental, vision, and disability insurance, a 401k retirement package, and stock options. We offer generous PTO, a well-supplied kitchen, and regular team activities. Most importantly, we offer the opportunity to build a world-class product with a team of talented engineers.

Get information on how to apply for this position. 2015-04-16T21:46:47Z

Blame and Coercion: Together Again for the First Time

Planet Haskell - 18 hours 19 min ago
Blame and Coercion: Together Again for the First Time. Jeremy Siek, Peter Thiemann, Philip Wadler. PLDI, June 2015.

C#, Dart, Pyret, Racket, TypeScript, VB: many recent languages integrate dynamic and static types via gradual typing. We systematically develop three calculi for gradual typing and the relations between them, building on and strengthening previous work. The calculi are: λB, based on the blame calculus of Wadler and Findler (2009); λC, inspired by the coercion calculus of Henglein (1994); and λS, inspired by the space-efficient calculus of Herman, Tomb, and Flanagan (2006) and the threesome calculus of Siek and Wadler (2010). While λB is little changed from previous work, λC and λS are new. Together, λB, λC, and λS provide a coherent foundation for design, implementation, and optimisation of gradual types.

We define translations from λB to λC and from λC to λS. Much previous work lacked proofs of correctness or had weak correctness criteria; here we demonstrate the strongest correctness criterion one could hope for: that each of the translations is fully abstract. Each of the calculi reinforces the design of the others: λC has a particularly simple definition, and the subtle definition of blame safety for λB is justified by the simple definition of blame safety for λC. Our calculus λS is implementation-ready: the first space-efficient calculus that is both straightforward to implement and easy to understand. We give two applications: first, using full abstraction from λC to λS to validate the challenging part of full abstraction between λB and λC; and, second, using full abstraction from λB to λS to easily establish the Fundamental Property of Casts, which required a custom bisimulation and six lemmas in earlier work.

A complement to blame

Planet Haskell - 18 hours 19 min ago
A complement to blame. Philip Wadler. SNAPL, May 2015.

Contracts, gradual typing, and hybrid typing all permit less-precisely typed and more-precisely typed code to interact. Blame calculus encompasses these, and guarantees blame safety: blame for type errors always lies with less-precisely typed code. This paper serves as a complement to the literature on blame calculus: it elaborates on motivation, comments on the reception of the work, critiques some work for not properly attending to blame, and looks forward to applications. No knowledge of contracts, gradual typing, hybrid typing, or blame calculus is assumed.

Improving Hackage security

Planet Haskell - 18 hours 19 min ago
The members of the Industrial Haskell Group are funding work to improve the security of packages downloaded from Hackage. The IHG members agreed to fund this task some time ago and my colleague Austin and I have been working on the design and implementation. In this post I will explain the problem and [...]

OverloadedRecordFields revived

Planet Haskell - 18 hours 19 min ago
Way back in the summer of 2013, with support from the Google Summer of Code programme, I implemented a GHC extension called OverloadedRecordFields to address the oft-stated desire to improve Haskell’s record system. This didn’t get merged into GHC HEAD at the time, because the implementation cost outweighed [...]