Functional Programming

Luminance – framebuffers and textures

Planet Haskell - 20 hours 32 min ago
I’m happily surprised that so many Haskell people follow luminance! First things first, let’s tell you about how it grows. Well, pretty quickly! There’s – yet – no method to make actual renders, because I’m still working on how to implement some stuff (I’ll detail that below), but it’s going in the right direction!

Framebuffers

Something that is almost done is the framebuffer part. The main idea of framebuffers – in OpenGL – is supporting offscreen renders, so that we can render to several framebuffers and combine them in several fancy ways. Framebuffers often have textures bound to them, used to pass the rendered information around, especially to shaders, or to get the pixels through texture reads CPU-side.

The thing is… OpenGL’s framebuffers are tedious. You can have incomplete framebuffers if you don’t attach textures with the right format, or if you attach them to the wrong attachment point. That’s why the framebuffer layer of luminance is there to solve that.

In luminance, a Framebuffer rw c d is a framebuffer with two formats: a color format, c, and a depth format, d. If c = (), then no color will be recorded. If d = (), then no depth will be recorded. That enables the use of color-only or depth-only renders, which are often optimized by GPUs. It also includes a rw type variable, which has the same role as for Buffer. That is, you can have read-only, write-only or read-write framebuffers.

And of course, all those features – having a write-only depth-only framebuffer, for instance – are set through… types! And that’s what is so cool about how things are handled in luminance. You just tell it what you want, and it’ll create the required state and manage it for you GPU-side.

Textures

The format types are used to know which textures to create and how to attach them internally. The textures are hidden from the interface so that you can’t mess with them. I still need to find a way to provide some kind of access to the information they hold, in order to use them in shaders for instance. I’d love to provide some kind of monoidal properties between framebuffers – to mimic gloss’s Monoid instance for its Picture type, basically.

You can create textures, of course, by using the createTexture w h mipmaps function. w is the width, h the height of the texture. mipmaps is the number of mipmaps you want for the texture.

You can then upload texels to the texture through several functions. The basic form is uploadWhole tex autolvl texels. It takes a texture tex and the texels to upload to the whole texture region. It’s your responsibility to ensure that you pass the correct number of texels. The texels are represented with a polymorphic type. You’re not bound to any particular container type. You can pass a list of texels, a Vector of texels, or whatever you want, as long as it’s Foldable.

It’s also possible to fill the whole texture with a single value. In OpenGL slang, such an operation is often called clearing – clearing a buffer, clearing a texture, clearing the back buffer, and so on. You can do that with fillWhole.

There are two other functions to work with subparts of textures, but they’re not interesting for the purpose of this blog entry.
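To make that concrete, here is a small usage sketch. The module name, the exact signatures, and the meaning of the autolvl argument (taken here to be an automatic-mipmap flag) are all my guesses from the names mentioned above, since the API was unreleased at the time of writing:

import Graphics.Luminance.Texture  -- assumed module name

createAndFill :: IO ()
createAndFill = do
  -- a 256x256 texture with 1 mipmap level
  tex <- createTexture 256 256 1
  -- "clear": set every texel to zero
  fillWhole tex 0
  -- upload explicit texels; any Foldable container works, here a list
  -- (True: assumed automatic mipmap regeneration flag)
  uploadWhole tex True
    [ sin (fromIntegral x / 64) | x <- [0 .. 256 * 256 - 1 :: Int] ]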
Pixel format

The cool thing is the fact that I’ve unified pixel formats. Textures and framebuffers share the same pixel format type (Format t c). Currently, they’re all phantom types, but I might unify them further and use DataKinds to promote them to the type level. A format has two type variables, t and c.

t is the underlying type. Currently, it can be either Int32, Word32 or Float. I might add support for Double as well later on.

c is the channel type. There are basically five channel types:

CR r, a red channel;
CRG r g, red and green channels;
CRGB r g b, red, green and blue channels;
CRGBA r g b a, red, green, blue and alpha channels;
CDepth d, a depth channel (special case of CR; for depths only).

The type variables r, g, b, a and d represent channel sizes. There are currently three kinds of channel sizes:

C8, for 8-bit;
C16, for 16-bit;
C32, for 32-bit.

Then, Format Float (CR C32) is a single red channel, 32-bit float – the OpenGL equivalent is R32F. Format Word32 (CRGB C8 C8 C16) is an RGB format with 8-bit unsigned integer red and green channels and a 16-bit unsigned integer blue channel.

Of course, if a pixel format doesn’t exist on the OpenGL side, you won’t be able to use it. Typeclasses are there to enforce that a pixel format can be represented on the OpenGL side.

Next steps

Currently, I’m working hard on how to represent vertex formats. That’s not a trivial task, because we can send vertices to OpenGL as interleaved – or non-interleaved – arrays. I’m trying to design something elegant and safe, and I’ll keep you informed when I finally get something. Then I’ll need to find an interface for the actual render command, and I should be able to release something we can actually use!

By the way, some people have already tried it (Git HEAD), and that’s amazing! I’ve created the unstable branch so that I can push unstable things, and keep the master branch as clean as possible.

Keep the vibe, and have fun hacking around!

Lightweight Checked Exceptions in Haskell

Planet Haskell - 20 hours 32 min ago
Consider this function from the http-conduit library:

``` {.haskell}
-- | Download the specified URL (..)
--
-- This function will 'throwIO' an 'HttpException' for (..)
simpleHttp :: MonadIO m => String -> m ByteString
```

Notice that part of the semantics of this function—that it may throw an HttpException—is [...]
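The body of the post is truncated in this feed. For reference, here is a sketch of one known encoding of type-tracked ("checked") exceptions, reconstructed from memory of this technique; the details may well differ from the post's actual code:

``` {.haskell}
{-# LANGUAGE RankNTypes, ScopedTypeVariables #-}
module CheckedSketch where

import Control.Exception (Exception, catch, throwIO)
import Data.Proxy (Proxy (..))
import Unsafe.Coerce (unsafeCoerce)

-- An empty class: a 'Throws e' constraint in a signature records, and
-- propagates to callers, the fact that 'e' may be thrown.
class Throws e

throwChecked :: (Exception e, Throws e) => e -> IO a
throwChecked = throwIO

-- Internal machinery for discharging the constraint. 'Wrap' reifies a
-- computation that still owes a 'Throws e' obligation; 'Catchable' has
-- a vacuous instance, and the coercion swaps one for the other, which
-- is benign because the dictionary of an empty class carries no data.
newtype Wrap e a = Wrap { unWrap :: Throws e => a }

newtype Catchable e = Catchable e
instance Throws (Catchable e)

unthrow :: forall e a proxy. proxy e -> (Throws e => a) -> a
unthrow _ x = unWrap (unsafeCoerce (Wrap x :: Wrap e a) :: Wrap (Catchable e) a)

-- Installing a handler for 'e' discharges the 'Throws e' obligation.
catchChecked :: forall e a. Exception e
             => (Throws e => IO a) -> (e -> IO a) -> IO a
catchChecked act = catch (unthrow (Proxy :: Proxy e) act)
```

Under this scheme, a function like simpleHttp could be given a type such as (Throws HttpException, MonadIO m) => String -> m ByteString, and callers would be forced either to propagate the constraint or to catch the exception.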

S3 Hackage mirror for Travis builds

Planet Haskell - 20 hours 32 min ago
Yesterday, I noticed a bunch of Travis build failures with the cabal error message "does not exist." As I covered last month, this is typically due to a download failure when trying to install packages. This leaves some ambiguity as to where the download failure originated: was the client (in this case, Travis) having network issues, or was there a problem on Hackage's end? (Or, of course, any one of dozens of other possible explanations.)

I have no hard data to prove that the failure is from the Hackage side, but anecdotal evidence suggests that it's Hackage. In particular, Gregory Collins reports that the Snap build bot is also having issues. When we had these issues in FP Haskell Center, we resolved them by switching to an S3 mirror, so I've decided to start migrating my Travis jobs over to this mirror as well. This is fairly straightforward. First, add a new script to your repo containing:

#!/bin/sh
set -eux
mkdir -p $HOME/.cabal
cat > $HOME/.cabal/config <

Parallel/Pipelined Conduit

Planet Haskell - 20 hours 32 min ago
Summary: I wrote a Conduit combinator which makes the upstream and downstream run in parallel. It makes Hoogle database generation faster.

The Hoogle database generation parses lines one by one using haskell-src-exts, and then encodes each line and writes it to a file. Using Conduit, that ends up being roughly:

parse =$= write

Conduit ensures that parsing and writing are interleaved, so each line is parsed and written before the next is parsed - ensuring minimal space usage. Recently the FP Complete guys profiled Hoogle database generation and found each of these pieces takes roughly the same amount of time, and together they are the bottleneck. Therefore, it seems likely that if we could parse the next line while writing the previous line we should be able to speed up database generation. I think of this as analogous to CPU pipelining, where the next instruction is decoded while the current one is executed.

I came up with the combinator:

pipelineC :: Int -> Consumer o IO r -> Consumer o IO r

Allowing us to write:

parse =$= pipelineC 10 write

Given a buffer size 10 (the maximum number of elements in memory simultaneously), and a Consumer (write), produce a new Consumer which is roughly the same but runs in parallel to its upstream (parse).

The Result

When using 2 threads the Hoogle 5 database creation drops from 45s to 30s. The CPU usage during the pipelined stage hovers between 180% and 200%, suggesting the stages are quite well balanced (as the profile suggested). The parsing stage is currently a little slower than the writing, so a buffer of 10 is plenty - increasing the buffer makes no meaningful difference. The reason the drop in total time is only 33% is that the non-pipelined steps (parsing Cabal files, writing summary information) take about 12s.

Note that Hoogle 5 remains unreleased, but can be tested from the git repo and will hopefully be ready soon.

The Code

The idea is to run the Consumer on a separate thread, and on the main thread keep pulling elements (using await) and pass them to the other thread, without blocking the upstream yield. The only tricky bit is what to do with exceptions. If the consumer thread throws an exception we have to get that back to the main thread so it can be dealt with normally. Fortunately async exceptions fit the bill perfectly. The full code is:

pipelineC :: Int -> Consumer o IO r -> Consumer o IO r
pipelineC buffer sink = do
    sem <- liftIO $ newQSem buffer  -- how many are in flow, to avoid excess memory
    chan <- liftIO newChan          -- the items in flow (type o)
    bar <- liftIO newBarrier        -- the result type (type r)
    me <- liftIO myThreadId
    liftIO $ flip forkFinally (either (throwTo me) (signalBarrier bar)) $ do
        runConduit $
            (whileM $ do
                x <- liftIO $ readChan chan
                liftIO $ signalQSem sem
                whenJust x yield
                return $ isJust x) =$= sink
    awaitForever $ \x -> liftIO $ do
        waitQSem sem
        writeChan chan $ Just x
    liftIO $ writeChan chan Nothing
    liftIO $ waitBarrier bar

We are using a channel chan to move elements from producer to consumer, a quantity semaphore sem to limit the number of items in the channel, and a barrier bar to store the return result (see the sketch below for the barrier type). On the consumer thread we read from the channel and yield to the consumer. On the main thread we awaitForever and write to the channel. At the end we move the result back from the consumer thread to the main thread. The full implementation is in the repo.
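The barrier above comes from the author's earlier post on the topic; a minimal sketch of such a write-once variable, assuming MVar semantics (the real version adds error checking):

import Control.Concurrent.MVar

-- A write-once synchronisation variable: one thread signals the final
-- value, any number of threads can wait for it (waiting does not empty it).
newtype Barrier a = Barrier (MVar a)

newBarrier :: IO (Barrier a)
newBarrier = Barrier <$> newEmptyMVar

-- Signalling twice would block with this minimal version; a production
-- implementation should raise an error instead.
signalBarrier :: Barrier a -> a -> IO ()
signalBarrier (Barrier var) = putMVar var

waitBarrier :: Barrier a -> IO a
waitBarrier (Barrier var) = readMVar var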
Enhancements

I have specialised pipelineC for Consumers that run in the IO monad. Since the Consumer can do IO, and the order of that IO has changed, it isn't exactly equivalent - but relying on such IO timing seems to break the spirit of Conduit anyway. I suspect pipelineC is applicable in some other monads, but am not sure which (ReaderT and ResourceT seem plausible, StateT seems less likely).

Acknowledgements: Thanks to Tom Ellis for helping figure out what type pipelineC should have.

2015-07-30T06:00:00Z Neil Mitchell noreply@blogger.com

GHC Weekly News - 2015/07/29

Planet Haskell - 20 hours 32 min ago
Hi *,

Welcome to the latest entry in the GHC Weekly News. Today GHC HQ met to discuss plans post-7.10.2.

GHC 7.10.2 release

GHC 7.10.2 has been released! Feel free to grab a tarball and enjoy! See the release notes for discussion of what has changed. As always, if you suspect that you have found a regression don't hesitate to open a Trac ticket. We are especially interested in performance regressions with fairly minimal reproduction cases.

GHC 7.10.2 and the text package

A few days ago a report came in of long compilation times under 7.10.2 on a program with many Text literals (#10528). This ended up being due to a change in the simplifier which caused it to perform rule rewrites on the left-hand sides of other rules. While this is questionable (read "buggy") behavior, it doesn't typically cause trouble so long as rules are properly annotated with phase control numbers to ensure they are performed in the correct order. Unfortunately, it turns out that the rules provided by the text package for efficiently handling string literals did not include phase control annotations. This resulted in a rule from base being performed on the literal rules, which rendered the literal rules ineffective. The simplifier would then expend a great deal of effort trying to simplify the rather complex terms that remained.

Thankfully, the fix is quite straightforward: ensure that the text literal rules fire in the first simplifier phase (phase 2). This avoids interference from the base rules, allowing them to fire as expected. This fix is now present in text-1.2.1.2. Users of GHC 7.10.2 should use this release if at all possible. Thanks to text's maintainer, Bryan O'Sullivan, for taking time out of his vacation to help me get this new release out.

While this misbehaviour was triggered by a bug in GHC, a similar outcome could have arisen even without this bug. This highlights the importance of including phase control annotations on INLINE and RULE pragmas: without them the compiler may perform rewrites in an order that you did not anticipate. This has also drawn attention to a few shortcomings in the current rewrite rule mechanism, which lacks the expressiveness to encode complex ordering relationships between rules. This limitation pops up in a number of places, including when trying to write rules on class-overloaded functions. Simon Peyton Jones is currently pondering possible solutions to this on #10595.
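For readers who haven't met phase control before, the annotation is the bracketed number on an INLINE or RULES pragma. A schematic example (the names here are illustrative, not the actual text or base rules):

module PhaseDemo where

double :: Int -> Int
double x = x + x

-- Don't inline 'double' until phase 1, so the rewrite rule below
-- (active only before phase 1; phases count down 2, 1, 0) gets a
-- chance to fire on calls to 'double' first.
{-# INLINE [1] double #-}

{-# RULES "double/times2" [~1] forall x. double x = x * 2 #-}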
StrictData

This week we merged the long-anticipated -XStrictData extension (Phab:D1033) by Adam Sandberg Ericsson. This implements a subset of the StrictPragma proposal initiated by Johan Tibell. In particular, StrictData allows a user to specify that datatype fields should be strict-by-default on a per-module basis, greatly reducing the syntactic noise introduced by this common pattern. In addition to implementing a useful feature, the patch ended up being a nice clean-up of GHC's handling of strictness annotations. What remains of this proposal is the stronger -XStrict extension, which essentially makes all bindings strict-by-default. Adam has indicated that he may take up this work later this summer.

AMP-related performance regression

In late May Herbert Valerio Riedel opened Phab:D924, which removed an explicit definition for mapM in the [] Traversable instance, as well as redefined mapM_ in terms of traverse_ to bring consistency with the post-AMP world. The patch remains unmerged, however, due to a failing ghci testcase. It turns out the regression is due to the redefinition of mapM_, which uses (*>) where (>>) was once used. This tickles poor behavior in ghci's ByteCodeAsm module. The problem can be resolved by defining (*>) = (>>) in the Applicative Assembler instance (e.g. Phab:D1097). That being said, the fact that this change has already exposed performance regressions raises doubts as to whether it is prudent.

GHC performance work

Over the last month or so I have been working on nailing down a variety of performance issues in GHC and the code it produces. This has resulted in a number of patches which in some cases dramatically improve compilation time (namely Phab:D1012 and Phab:D1041). Now that 7.10.2 is out I'll again be spending most of my time on these issues. We have heard a number of reports that GHC 7.10 has regressed on real-world programs. If you have a reproducible performance regression that you would like to see addressed please open a Trac ticket.

Merged patches

Phab:D1028: Fixity declarations are now allowed for infix data constructors in GHCi (thanks to Thomas Miedema)
Phab:D1061: Fix a long-standing correctness issue arising when pattern matching on floating point values
Phab:D1085: Allow programs to run in environments lacking iconv (thanks to Reid Barton)
Phab:D1094: Improve code generation in integer-gmp (thanks to Reid Barton)
Phab:D1068: Implement support for the MO_U_Mul2 MachOp in the LLVM backend (thanks to Michael Terepeta)
Phab:D524: Improve runtime system allocator performance with two-step allocation (thanks to Simon Marlow)

That's all for this time. Enjoy your week!

Cheers,

Ben

2015-07-29T15:52:19Z bgamari

Another ounce of theory

Planet Haskell - 20 hours 32 min ago
A few months ago I wrote an article here called an ounce of theory is worth a pound of search and I have a nice followup. When I went looking for that article I couldn't find it, because I thought it was about how an ounce of search is worth a pound of theory, and that I was writing a counterexample. I am quite surprised to discover that I have several times discussed how a little theory can replace a lot of searching, and not vice versa, but perhaps that is because the search is my default.

Anyway, the question came up on math StackExchange today:

John has 77 boxes each having dimensions 3×3×1. Is it possible for John to build one big box with dimensions 7×9×11?

OP opined no, but had no argument. The first answer that appeared was somewhat elaborate and outlined a computer search strategy which claimed to reduce the search space to only 14,553 items. (I think the analysis is wrong, but I agree that the search space is not too large.)

I almost wrote the search program. I have a program around that is something like what would be needed, although it is optimized to deal with a few oddly-shaped tiles instead of many similar tiles, and would need some work. Fortunately, I paused to think a little before diving in to the programming.

For there is an easy answer. Suppose John solved the problem. Look at just one of the 7×11 faces of the big box. It is a 7×11 rectangle that is completely filled by 1×3 and 3×3 rectangles. Each of those rectangles covers a number of cells divisible by 3, so the face's area would have to be divisible by 3. But 7×11 = 77 is not a multiple of 3. So there can be no solution.

Now how did I think of this? It was a very geometric line of reasoning. I imagined a 7×11×9 carton and imagined putting the small boxes into the carton. There can be no leftover space; every one of the 693 cells must be filled. So in particular, we must fill up the bottom 7×11 layer, and there is no point trying to fill any other layer until the bottom one is filled. So I started considering how to pack the bottommost 7×11×1 slice with just the bottom parts of the small boxes and quickly realized it couldn't be done; there is always an empty cell left over somewhere, usually in the corner.

The argument about considering just one face of the large box came later; I decided it was clearer than what I actually came up with. I think this is a nice example of the Pólya strategy “solve a simpler problem” from How to Solve It, but I was not thinking of that specifically when I came up with the solution.

For a more interesting problem of the same sort, suppose you have six 2×2×1 slabs. Is it possible to pack them into a 3×3×3 box? (There will, of course, be some space left over.)
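Since the post mentions almost writing a search program, here is a small brute-force sketch in Haskell for the closing puzzle (my code, not the post's; it reports whether six such slabs fit, without spoiling the answer here):

import Data.List (tails)
import qualified Data.Set as Set

type Cell = (Int, Int, Int)

-- Every axis-aligned placement of a 2x2x1 slab inside the 3x3x3 cube,
-- represented as the set of unit cells it occupies.
placements :: [Set.Set Cell]
placements =
  [ Set.fromList
      [ (x + dx, y + dy, z + dz)
      | dx <- [0 .. sx - 1], dy <- [0 .. sy - 1], dz <- [0 .. sz - 1] ]
  | (sx, sy, sz) <- [(2, 2, 1), (2, 1, 2), (1, 2, 2)]
  , x <- [0 .. 3 - sx], y <- [0 .. 3 - sy], z <- [0 .. 3 - sz]
  ]

-- Can we choose n pairwise-disjoint placements? Placements are tried
-- in a fixed order so each combination is examined only once.
fits :: Int -> Set.Set Cell -> [Set.Set Cell] -> Bool
fits 0 _ _ = True
fits n used ps =
  or [ fits (n - 1) (Set.union used p) rest
     | p : rest <- tails ps
     , Set.null (Set.intersection used p) ]

main :: IO ()
main = print (fits 6 Set.empty placements)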

Darcs rebase by example

Planet Haskell - 20 hours 32 min ago
Darcs is a patch-centric version control system. In Darcs, there is no “correct” linear history of a repository – rather, there is a poset of patches. That means that most of the time you are pushing and pulling changes you … Continue reading →

The LambdaCube 3D Tour

Planet Haskell - 20 hours 32 min ago
Prologue This blog has been silent for a long time, so it’s definitely time for a little update. A lot of things happened since the last post! Our most visible achievement from this period is the opening of the official website for LambdaCube 3D at ‒ wait for it! ‒ lambdacube3d.com, which features an interactive editor among […]

Migrating a Project to stack

Planet Haskell - 20 hours 32 min ago
This post consists of notes on the steps I took to convert one of my Haskell projects to stack. It provides a small illustration of how flexible stack can be in accommodating project organisation quirks on the way towards predictable builds. If you want to see the complete results, here are links to the Bitbucket repository of Stunts Cartography, the example project I am using, and specifically to the source tree immediately after the migration.

The first decision to make when migrating a project is which Stackage snapshot to pick. It had been a while since I last updated my project, and building it with the latest versions of all its dependencies would require a few adjustments. That being so, I chose to migrate to stack before any further patches. Since one of the main dependencies was diagrams 1.2, I went for lts-2.19, the most recent LTS snapshot with that version of diagrams [1].

$ stack init --resolver lts-2.19

stack init creates a stack.yaml file based on an existing cabal file in the current directory. The --resolver option can be used to pick a specific snapshot.

One complicating factor in the conversion to stack was that two of the extra dependencies, threepenny-gui-0.5.0.0 (one major version behind the current one) and zip-conduit, wouldn’t build with the LTS snapshot plus current Hackage without version bumps in their cabal files. Fortunately, stack deals very well with situations like this, in which minor changes to some dependency are needed. I simply forked the dependencies on GitHub, pushed the version bumps to my forks and referenced the commits in the remote GitHub repository in stack.yaml. A typical entry for a Git commit in the packages section looks like this:

- location:
    git: https://github.com/duplode/zip-conduit
    commit: 1eefc8bd91d5f38b760bce1fb8dd16d6e05a671d
  extra-dep: true

Keeping customised dependencies in public remote repositories is an excellent solution. It enables users to build the package without further intervention, without requiring developers to clumsily bundle the source tree of the dependencies with the project, or to wait for a pull request to be accepted upstream and reach Hackage.

With the two tricky extra dependencies offloaded to Git repositories, the next step was using stack solver to figure out the rest of them:

$ stack solver --modify-stack-yaml
This command is not guaranteed to give you a perfect build plan
It's possible that even with the changes generated below, you will still need to do some manual tweaking
Asking cabal to calculate a build plan, please wait
extra-deps:
- parsec-permutation-0.1.2.0
- websockets-snap-0.9.2.0
Updated /home/duplode/Development/stunts/diagrams/stack.yaml

Here is the final stack.yaml:

flags:
  stunts-cartography:
    repldump2carto: true
packages:
- '.'
- location:
    git: https://github.com/duplode/zip-conduit
    commit: 1eefc8bd91d5f38b760bce1fb8dd16d6e05a671d
  extra-dep: true
- location:
    git: https://github.com/duplode/threepenny-gui
    commit: 2dd88e893f09e8e31378f542a9cd253cc009a2c5
  extra-dep: true
extra-deps:
- parsec-permutation-0.1.2.0
- websockets-snap-0.9.2.0
resolver: lts-2.19

repldump2carto is a flag defined in the cabal file. It is used to build a secondary executable. Beyond demonstrating how the flags section of stack.yaml works, I added it because stack ghci expects all possible build targets to have been built [2].

As I have GHC 7.10.1 from my Linux distribution and the LTS 2.19 snapshot is made for GHC 7.8.4, I needed stack setup as an additional step.
That command locally installs (in ~/.stack) the GHC version required by the chosen snapshot. That pretty much concludes the migration. All that is left is demonstrating: stack build to compile the project…

$ stack build
JuicyPixels-3.2.5.2: configure
Boolean-0.2.3: download
# etc. (Note how deps from Git are handled seamlessly.)
threepenny-gui-0.5.0.0: configure
threepenny-gui-0.5.0.0: build
threepenny-gui-0.5.0.0: install
zip-conduit-0.2.2.2: configure
zip-conduit-0.2.2.2: build
zip-conduit-0.2.2.2: install
# etc.
stunts-cartography-0.4.0.3: configure
stunts-cartography-0.4.0.3: build
stunts-cartography-0.4.0.3: install
Completed all 64 actions.

… stack ghci to play with it in GHCi…

$ stack ghci
Configuring GHCi with the following packages: stunts-cartography
GHCi, version 7.8.4: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
-- etc.
Ok, modules loaded: GameState, Annotation, Types.Diagrams, Pics, Pics.MM,
Annotation.Flipbook, Annotation.LapTrace, Annotation.LapTrace.Vec,
Annotation.LapTrace.Parser.Simple, Annotation.Parser, Types.CartoM,
Parameters, Composition, Track, Util.Misc, Pics.Palette, Output,
Util.ByteString, Util.ZipConduit, Replay, Paths, Util.Reactive.Threepenny,
Util.Threepenny.Alertify, Widgets.BoundedInput.
*GameState> :l src/Viewer.hs -- The Main module.
-- etc.
*Main> :main
Welcome to Stunts Cartography.
Open your web browser and navigate to localhost:10000 to begin.
Listening on http://127.0.0.1:10000/
[27/Jul/2015:00:55:11 -0300] Server.httpServe: START, binding to [http://127.0.0.1:10000/]

… and looking at the build output in the depths of .stack-work:

$ .stack-work/dist/x86_64-linux/Cabal-1.18.1.5/build/sc-trk-viewer/sc-trk-viewer
Welcome to Stunts Cartography 0.4.0.3.
Open your web browser and navigate to localhost:10000 to begin.
Listening on http://127.0.0.1:10000/
[26/Jul/2015:20:02:54 -0300] Server.httpServe: START, binding to [http://127.0.0.1:10000/]

With the upcoming stack 0.2 it will be possible to use stack build --copy-bins --local-bin-path <path> to copy any executables built as part of the project to a path. If the --local-bin-path option is omitted, the default is ~/.local/bin. (In fact, you can already copy executables to ~/.local/bin with stack 0.1.2 through stack install. However, I don’t want to overemphasise that command, as stack install not being equivalent to cabal install can cause some confusion.)

Hopefully this report will give you an idea of what to expect when migrating your projects to stack. Some details may appear a little strange, given how familiar cabal-install workflows are, and some features are still being shaped. All in all, however, stack works very well already: it definitely makes setting up reliable builds easier. The stack repository at GitHub, and especially the wiki therein, offers lots of helpful information, in case you need further details and usage tips.

[1] As a broader point, it just seems polite to, when possible, pick an LTS snapshot rather than a nightly for a public project. It is more likely that those interested in building your project already have a specific LTS rather than an arbitrary nightly.↩

[2] That being so, a more natural arrangement would be treating repldump2carto as a full-blown subproject by giving it its own cabal file and adding it to the packages section, as sketched below.
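A hypothetical packages section for that arrangement might look like this (the subdirectory name is my invention):

packages:
- '.'               # the main stunts-cartography package
- repldump2carto    # hypothetical subdirectory with its own cabal file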
I would then be able to load only the main project in GHCi with stack ghci stunts-cartography.↩

Comment on GitHub (see the full post for a reddit link) Post licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. 2015-07-27T06:00:00Z Daniel Mlot

OCaml server-side developer at Ahrefs Research (Full-time)

Planet Haskell - 20 hours 32 min ago
Who we are

Ahrefs Research is a San Francisco branch of Ahrefs Pte Ltd (Singapore), which runs an internet-scale bot that crawls the whole Web 24/7, storing huge volumes of information to be indexed and structured in a timely fashion. On top of that Ahrefs is building analytical services for end-users.

Ahrefs Research develops a custom petabyte-scale distributed storage to accommodate all that data coming in at high speed, focusing on performance, robustness and ease of use. The performance-critical low-level part is implemented in C++ on top of a distributed filesystem, while all the coordination logic and communication layer, along with the API library exposed to the developer, is in OCaml.

We are a small team and strongly believe in better technology leading to better solutions for real-world problems. We worship functional languages and static typing, extensively employ code generation and meta-programming, value code clarity and predictability, and constantly seek to automate repetitive tasks and eliminate boilerplate, guided by DRY and following KISS. If there is any new technology that will make our life easier - no doubt, we'll give it a try. We rely heavily on opensource code (as the only viable way to build a maintainable system) and contribute back, see e.g. https://github.com/ahrefs . It goes without saying that our team is all passionate and experienced OCaml programmers, ready to lend a hand or explain that intricate ocamlbuild rule. Our motto is "first do it, then do it right, then do it better".

What we need

Ahrefs Research is looking for a backend developer with a deep understanding of operating systems and networks, and a taste for simple and efficient architectural designs. Our backend is implemented mostly in OCaml and some C++, so proficiency in OCaml is very much appreciated; otherwise, a strong inclination to learn OCaml intensively in the short term will be required. Understanding of functional programming in general and/or experience with other FP languages (F#, Haskell, Scala, Scheme, etc.) will help a lot. Knowledge of C++ is a plus.

The candidate will have to deal with the following technologies on a daily basis:

networks & distributed systems
4+ petabytes of live data
OCaml
C++
linux
git

The ideal candidate is expected to:

Independently deal with and investigate bugs, schedule tasks and dig into code
Make reasoned technical choices and take responsibility for them
Understand the whole technology stack at all levels: from network and userspace code to OS internals and hardware
Handle the full development cycle of a single component, i.e. formalize the task, write code and tests, set up and support production (devops)
Approach problems with a practical mindset and suppress perfectionism when time is a priority

These requirements stem naturally from our approach to development, with its fast feedback cycle, highly-focused personal areas of responsibility and strong tendency towards vertical component splitting.

What you get

We provide:

Competitive salary
Modern office in San Francisco SOMA (Embarcadero)
Informal and thriving atmosphere
First-class workplace equipment (hardware, tools)
No dress code

Get information on how to apply for this position. 2015-07-20T08:07:26Z

GCD and Parallel Collections in Swift

Planet Haskell - 20 hours 32 min ago
One of the benefits of functional programming is that it's straightforward to parallelize operations. Common FP idioms like map, filter and reduce can be adapted so they run on many cores at once, letting you get instant parallelization wherever you find a bottleneck.

The benefits of these parallel combinators are huge. Wherever you find a bottleneck in your program, you can simply replace your call to map with a call to a parallel map and your code will be able to take advantage of all the cores on your system. On my eight-core system, for example, simply using a different function can theoretically yield an eight-fold speed boost. (Of course, there are a few reasons you might not see that theoretical speed improvement: namely, the overhead of creating threads, splitting up the work, synchronizing data between the threads, etc. Nevertheless, if you profile your code and focus on hotspots, you can see tremendous improvements with simple changes.)

Swift doesn't yet come with parallel collections functions, but we can build them ourselves, using Grand Central Dispatch:

// requires Swift 2.0 or higher
extension Array {
    public func pmap<T>(transform: (Element -> T)) -> [T] {
        guard !self.isEmpty else {
            return []
        }

        var result: [(Int, [T])] = []

        let group = dispatch_group_create()
        let lock = dispatch_queue_create("pmap queue for result", DISPATCH_QUEUE_SERIAL)

        let step: Int = max(1, self.count / NSProcessInfo.processInfo().activeProcessorCount) // step can never be 0

        for var stepIndex = 0; stepIndex * step < self.count; stepIndex++ {
            let capturedStepIndex = stepIndex

            var stepResult: [T] = []
            dispatch_group_async(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
                for i in (capturedStepIndex * step)..<((capturedStepIndex + 1) * step) {
                    if i < self.count {
                        let mappedElement = transform(self[i])
                        stepResult += [mappedElement]
                    }
                }

                dispatch_group_async(group, lock) {
                    result += [(capturedStepIndex, stepResult)]
                }
            }
        }

        dispatch_group_wait(group, DISPATCH_TIME_FOREVER)

        return result.sort { $0.0 < $1.0 }.flatMap { $0.1 }
    }
}

pmap takes the same arguments as map but simply runs the function across all of your system's CPUs. Let's break the function down, step by step.

In the case of an empty array, pmap returns early, since the overhead of splitting up the work and synchronizing the results is non-trivial. We might take this even further by falling back to a sequential map for arrays with a very small element count.

Create a Grand Central Dispatch group that we can associate with the GCD blocks we'll run later on. Since all of these blocks will be in the same group, the invoking thread can wait for the group to be empty at the end of the function and know for certain that all of the background work has finished.

Create a dedicated, sequential lock queue to control access to the result array. This is a common pattern in GCD: simulating a mutex with a sequential queue. Since a sequential queue will never run two blocks simultaneously, we can be sure that whatever operations we perform in this queue will be isolated from one another.

Next, pmap breaks the array up into "steps", based on the host machine's CPU count (since this is read from NSProcessInfo, this function will automatically receive performance benefits when run on machines with more cores). Each step is dispatched to one of GCD's global background queues. In the invoking thread, this for loop will run very, very quickly, since all it does is add closures to background queues.

The main for loop iterates through each "step," capturing the stepIndex in a local variable. If we don't do this, the closures passed to dispatch_group_async will all refer to the same storage location - as the for loop increments, all of the workers will see stepIndex increase by one and will all operate on the same step. By capturing the variable, each worker has its own copy of stepIndex, which never changes as the for loop proceeds.

We calculate the start and end indices for this step. For each array element in that range, we call transform on the element and add it to this worker's local stepResult array. Because it's unlikely that the number of elements in the array will be divisible by a given machine's processor count, we check that i never goes beyond the end of the array, which could otherwise happen in the very last step.

After an entire step has been processed, we add this worker's results to the master result array. Since the order in which workers will finish is nondeterministic, each element of the result array is a tuple containing the stepIndex and the transformed elements in that step's range. We use the lock queue to ensure that all changes to the result array are synchronized. Note that we only have to enter this critical section once for each core - an alternative implementation of pmap might create a single master result array of the same size as the input and set each element to its mapped result as it goes. But this would have to enter the critical section once for every array element, instead of just once for each CPU, generating more memory and processor contention and benefiting less from spatial locality. We use dispatch_sync instead of dispatch_async because we want to be sure that the worker's changes have been applied to the masterResults array before declaring this worker to be done. If we were to use dispatch_async, the scheduler could very easily finish all of the step blocks but leave one or more of these critical section blocks unprocessed, leaving us with an incomplete result.

On the original thread, we call dispatch_group_wait, which waits until all blocks in the group have completed. At this point, we know that all work has been done and all changes to the master results array have been made.

The final line sorts the master array by stepIndex (since steps finish in a nondeterministic order) and then flattens the master array in that order.

To see how this works, let's create a simple profile function:

func profile<A>(desc: String, block: () -> A) -> Void {
    let start = NSDate().timeIntervalSince1970
    block()

    let duration = NSDate().timeIntervalSince1970 - start
    print("Profiler: completed \(desc) in \(duration * 1000)ms")
}

We'll test this out using a simple function called slowCalc, which adds a small sleep delay before each calculation, to ensure that each map operation does enough work (in production code, you should never sleep in code submitted to a GCD queue - this is purely for demonstration purposes). Without this little delay, the overhead of parallelization would be too great:

func slowCalc(x: Int) -> Int {
    NSThread.sleepForTimeInterval(0.1)
    return x * 2
}

let smallTestData: [Int] = [Int](0..<10)
let largeTestData = [Int](0..<300)

profile("large dataset (sequential)") { largeTestData.map { slowCalc($0) } }
profile("large dataset (parallel)") { largeTestData.pmap { slowCalc($0) } }

On my eight-core machine, this results in:

Profiler: completed large dataset (sequential) in 31239.7990226746ms
Profiler: completed large dataset (parallel) in 4005.04493713379ms

a 7.8-fold increase, which is about what you'd expect.

It's important to remember that if each iteration doesn't do enough work, the overhead of splitting up work, setting up worker blocks and synchronizing data access will far outweigh the time savings of parallelization. The amount of overhead involved can be surprising. This code is identical to the above, except that it doesn't add the extra delay.

profile("large dataset (sequential, no delay)") { largeTestData.map { $0 * 2 } }
profile("large dataset (parallel, no delay)") { largeTestData.pmap { $0 * 2 } }

On my machine, it results in:

Profiler: completed large dataset (sequential, no delay) in 53.4629821777344ms
Profiler: completed large dataset (parallel, no delay) in 161.548852920532ms

The parallel version is three times slower than the sequential version! This is a really important consideration when using parallel collection functions:

Make sure that each of your iterations does enough work to make parallelization worth it.

Parallel collections are not a panacea - you can't just sprinkle them throughout your code and assume you'll get a performance boost. You still need to profile for hotspots, and it's important to focus on bottlenecks found through profiling, rather than hunches about what parts of your code are slowest.

Modern CPUs are blindingly fast - an addition or multiplication operation is so fast that it's not worth parallelizing these, unless your array is very large.

You can use the same techniques to implement a parallel filter function:

// requires Swift 2.0 or higher
extension Array {
    public func pfilter(includeElement: Element -> Bool) -> [Element] {
        guard !self.isEmpty else {
            return []
        }

        var result: [(Int, [Element])] = []

        let group = dispatch_group_create()
        let lock = dispatch_queue_create("pmap queue for result", DISPATCH_QUEUE_SERIAL)

        let step: Int = max(1, self.count / NSProcessInfo.processInfo().activeProcessorCount) // step can never be 0

        for var stepIndex = 0; stepIndex * step < self.count; stepIndex++ {
            let capturedStepIndex = stepIndex

            var stepResult: [Element] = []
            dispatch_group_async(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
                for i in (capturedStepIndex * step)..<((capturedStepIndex + 1) * step) {
                    if i < self.count && includeElement(self[i]) {
                        stepResult += [self[i]]
                    }
                }

                dispatch_group_async(group, lock) {
                    result += [(capturedStepIndex, stepResult)]
                }
            }
        }

        dispatch_group_wait(group, DISPATCH_TIME_FOREVER)

        return result.sort { $0.0 < $1.0 }.flatMap { $0.1 }
    }
}

This code is almost exactly identical to pmap - only the logic in the inner for loop is different.

We can now start using these combinators together (again, we have to use a slowed-down predicate function in order to see the benefit from parallelization):

func slowTest(x: Int) -> Bool {
    NSThread.sleepForTimeInterval(0.1)
    return x % 2 == 0
}

profile("large dataset (sequential)") { largeTestData.filter { slowTest($0) }.map { slowCalc($0) } }
profile("large dataset (sequential filter, parallel map)") { largeTestData.filter { slowTest($0) }.pmap { slowCalc($0) } }
profile("large dataset (parallel filter, sequential map)") { largeTestData.pfilter { slowTest($0) }.map { slowCalc($0) } }
profile("large dataset (parallel filter, parallel map)") { largeTestData.pfilter { slowTest($0) }.pmap { slowCalc($0) } }

which results in:

Profiler: completed large dataset (sequential) in 1572.28803634644ms
Profiler: completed large dataset (sequential filter, parallel map) in 1153.90300750732ms
Profiler: completed large dataset (parallel filter, sequential map) in 642.061948776245ms
Profiler: completed large dataset (parallel filter, parallel map) in 231.456995010376ms

Using one parallel combinator gives a slight improvement; combining the two parallel operations gives us a fivefold performance improvement over the basic sequential implementation.

Here are some other directions to pursue:

Implement parallel versions of find, any/exists and all. These are tricky because their contracts stipulate that processing stops as soon as they have a result. So you'll have to find some way to stop your parallel workers as soon as the function has its answer.

Implement a parallel version of reduce. The benefit of doing this is that reduce is a "primitive" higher-order function - you can easily implement pmap and pfilter given a parallel reduce function.

Generalize these functions to work on all collections (not just arrays), using Swift 2's protocol extensions.
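Since this post is syndicated on a Haskell feed, it may be worth noting for comparison that a parallel map in Haskell can lean on the parallel package; a minimal sketch (compile with -threaded and run with +RTS -N):

import Control.DeepSeq (NFData)
import Control.Parallel.Strategies (parMap, rdeepseq)

-- Evaluate each mapped element in a parallel spark; the runtime
-- distributes the sparks across however many capabilities it has.
pmap :: NFData b => (a -> b) -> [a] -> [b]
pmap = parMap rdeepseq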

[hqoeierf] Planck frequency pitch standard

Planet Haskell - 20 hours 32 min ago
We present a fanciful alternative to the musical pitch standard A440 by having some piano key note, not necessarily A4, have a frequency that is an integer perfect power multiple of the Planck time interval.

Let Pf = Planck frequency = 1/plancktime = 1/(5.3910604e-44 s) = 1.8549226e+43 Hz.

We first consider some possibilities of modifying A = 440 Hz as little as possible.  Sharpness or flatness is given in cents, where 100 cents = 1 semitone.

F3 = Pf / 725926^7 = 174.6141 Hz, or A = 440.0000 Hz, offset = -0.00003 cents
G3 = Pf / 714044^7 = 195.9977 Hz, or A = 440.0000 Hz, offset = -0.00013 cents
E3 = Pf / 135337^8 = 164.8137 Hz, or A = 439.9999 Hz, offset = -0.00030 cents
G3 = Pf / 132437^8 = 195.9978 Hz, or A = 440.0001 Hz, offset = 0.00045 cents
D#5 = Pf / 31416^9 = 622.2542 Hz, or A = 440.0001 Hz, offset = 0.00053 cents
A#3 = Pf / 12305^10 = 233.0825 Hz, or A = 440.0011 Hz, offset = 0.00442 cents
C#5 = Pf / 1310^13 = 554.3690 Hz, or A = 440.0030 Hz, offset = 0.01176 cents
A#3 = Pf / 360^16 = 233.0697 Hz, or A = 439.9770 Hz, offset = -0.09058 cents
A#1 = Pf / 77^22 = 58.2814 Hz, or A = 440.0824 Hz, offset = 0.32419 cents
D#4 = Pf / 50^24 = 311.2044 Hz, or A = 440.1095 Hz, offset = 0.43060 cents
E1 = Pf / 40^26 = 41.1876 Hz, or A = 439.8303 Hz, offset = -0.66769 cents
B5 = Pf / 22^30 = 990.0232 Hz, or A = 441.0052 Hz, offset = 3.95060 cents
F#3 = Pf / 10^41 = 185.4923 Hz, or A = 441.1774 Hz, offset = 4.62660 cents
A7 = Pf / 7^47 = 3537.6749 Hz, or A = 442.2094 Hz, offset = 8.67126 cents
G6 = Pf / 3^84 = 1549.3174 Hz, or A = 434.7625 Hz, offset = -20.73121 cents
G#7 = Pf / 2^132 = 3406.9548 Hz, or A = 451.1929 Hz, offset = 43.48887 cents

Next, some modifications of other pitch standards, used by continental European orchestras.

Modifications of A = 441 Hz:

C#6 = Pf / 106614^8 = 1111.2503 Hz, or A = 441.0000 Hz, offset = -0.00007 cents
F2 = Pf / 39067^9 = 87.5055 Hz, or A = 441.0000 Hz, offset = -0.00011 cents
G#2 = Pf / 38322^9 = 104.0620 Hz, or A = 440.9995 Hz, offset = -0.00184 cents
G1 = Pf / 6022^11 = 49.1109 Hz, or A = 441.0006 Hz, offset = 0.00240 cents
B5 = Pf / 22^30 = 990.0232 Hz, or A = 441.0052 Hz, offset = 0.02044 cents
F#3 = Pf / 10^41 = 185.4923 Hz, or A = 441.1774 Hz, offset = 0.69644 cents
A7 = Pf / 7^47 = 3537.6749 Hz, or A = 442.2094 Hz, offset = 4.74110 cents
E7 = Pf / 5^57 = 2673.2253 Hz, or A = 446.0410 Hz, offset = 19.67702 cents
G6 = Pf / 3^84 = 1549.3174 Hz, or A = 434.7625 Hz, offset = -24.66137 cents
G#7 = Pf / 2^132 = 3406.9548 Hz, or A = 451.1929 Hz, offset = 39.55871 cents

Modifications of A = 442 Hz:

D#6 = Pf / 547981^7 = 1250.1649 Hz, or A = 442.0000 Hz, offset = 0.00014 cents
G6 = Pf / 530189^7 = 1575.1097 Hz, or A = 442.0002 Hz, offset = 0.00086 cents
G#6 = Pf / 525832^7 = 1668.7709 Hz, or A = 442.0003 Hz, offset = 0.00116 cents
F#4 = Pf / 122256^8 = 371.6759 Hz, or A = 441.9996 Hz, offset = -0.00170 cents
A5 = Pf / 30214^9 = 883.9990 Hz, or A = 441.9995 Hz, offset = -0.00194 cents
F#4 = Pf / 11744^10 = 371.6767 Hz, or A = 442.0006 Hz, offset = 0.00242 cents
A7 = Pf / 217^17 = 3535.9843 Hz, or A = 441.9980 Hz, offset = -0.00769 cents
D2 = Pf / 151^19 = 73.7503 Hz, or A = 442.0024 Hz, offset = 0.00939 cents
A2 = Pf / 62^23 = 110.4885 Hz, or A = 441.9539 Hz, offset = -0.18072 cents
D#3 = Pf / 38^26 = 156.2976 Hz, or A = 442.0764 Hz, offset = 0.29903 cents
D#4 = Pf / 37^26 = 312.6662 Hz, or A = 442.1768 Hz, offset = 0.69244 cents
A7 = Pf / 7^47 = 3537.6749 Hz, or A = 442.2094 Hz, offset = 0.81985 cents
E7 = Pf / 5^57 = 2673.2253 Hz, or A = 446.0410 Hz, offset = 15.75576 cents
G6 = Pf / 3^84 = 1549.3174 Hz, or A = 434.7625 Hz, offset = -28.58262 cents
G#7 = Pf / 2^132 = 3406.9548 Hz, or A = 451.1929 Hz, offset = 35.63745 cents

Modifications of A = 443 Hz:

F#5 = Pf / 590036^7 = 745.0342 Hz, or A = 443.0000 Hz, offset = 0.00003 cents
C7 = Pf / 508595^7 = 2107.2749 Hz, or A = 443.0000 Hz, offset = -0.00007 cents
F7 = Pf / 488038^7 = 2812.8743 Hz, or A = 442.9999 Hz, offset = -0.00020 cents
B2 = Pf / 140193^8 = 124.3126 Hz, or A = 442.9998 Hz, offset = -0.00093 cents
A5 = Pf / 109676^8 = 885.9985 Hz, or A = 442.9992 Hz, offset = -0.00296 cents
B7 = Pf / 25564^9 = 3978.0160 Hz, or A = 443.0012 Hz, offset = 0.00456 cents
G#1 = Pf / 5988^11 = 52.2668 Hz, or A = 442.9982 Hz, offset = -0.00722 cents
B1 = Pf / 391^16 = 62.1581 Hz, or A = 443.0125 Hz, offset = 0.04895 cents
A6 = Pf / 226^17 = 1772.0760 Hz, or A = 443.0190 Hz, offset = 0.07422 cents
F7 = Pf / 163^18 = 2811.5701 Hz, or A = 442.7946 Hz, offset = -0.80308 cents
A#3 = Pf / 60^23 = 234.8805 Hz, or A = 443.3954 Hz, offset = 1.54462 cents
E6 = Pf / 35^26 = 1326.0401 Hz, or A = 442.5128 Hz, offset = -1.90507 cents
E2 = Pf / 34^27 = 82.8696 Hz, or A = 442.4704 Hz, offset = -2.07100 cents
C#2 = Pf / 18^33 = 69.8768 Hz, or A = 443.6902 Hz, offset = 2.69500 cents
A7 = Pf / 7^47 = 3537.6749 Hz, or A = 442.2094 Hz, offset = -3.09255 cents
E7 = Pf / 5^57 = 2673.2253 Hz, or A = 446.0410 Hz, offset = 11.84337 cents
G#7 = Pf / 2^132 = 3406.9548 Hz, or A = 451.1929 Hz, offset = 31.72506 cents

Planck time is not known to high precision due to uncertainty of the gravitational constant G.  Fortunately (and coincidentally), musical instruments are not tuned to greater than 7 significant digits of precision, either.

Source code in Haskell. The algorithm is not clever; it simply brute forces every perfect integer power multiple of Planck time, with base less than 1 million, and within the range of an 88-key piano.  The code can also base the fundamental frequency off the hydrogen 21 cm line or off the frequency of cesium used for atomic clocks.

Inspired by Scientific pitch, which set C4 = 2^8 Hz = 256 Hz, or A = 430.538964609902 Hz, offset = -37.631656229590796 cents.
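The linked Haskell source is not included in this feed; a minimal sketch of the brute-force search it describes might look like this (my reconstruction, not the author's code):

planckFreq :: Double
planckFreq = 1 / 5.3910604e-44  -- Hz

-- An 88-key piano spans A0 (27.5 Hz) to C8 (~4186 Hz).
pianoRange :: (Double, Double)
pianoRange = (27.5, 4186.01)

-- Signed distance, in cents, from the nearest equal-tempered
-- semitone of the A440 scale.
deviationCents :: Double -> Double
deviationCents f =
  let c = 1200 * logBase 2 (f / 440)
  in c - 100 * fromIntegral (round (c / 100) :: Integer)

-- Every frequency Pf / b^k that lands inside the piano range,
-- for bases b below one million and exponents k >= 2.
candidates :: [(Integer, Int, Double)]
candidates =
  [ (b, k, f)
  | b <- [2 .. 999999]
  , k <- takeWhile (\k -> freq b k >= lo) [2 ..]
  , let f = freq b k
  , f <= hi
  ]
  where
    (lo, hi) = pianoRange
    freq b k = planckFreq / (fromIntegral b ^^ k)

main :: IO ()
main = mapM_ print
  [ c | c@(_, _, f) <- candidates, abs (deviationCents f) < 0.01 ]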

Better Yaml Parsing

Planet Haskell - 20 hours 32 min ago
Michael Snoyman’s yaml package reuses aeson’s interface (the Value type and the ToJSON & FromJSON classes) to specify how datatypes should be serialized and deserialized. It’s not a secret that aeson’s primary goal is raw performance. This goal may be at odds with the goal of YAML: being human readable and writable. In this article, I’ll explain how a better way of parsing human-written YAML may work. The second direction – serializing to YAML – also needs attention, but I’ll leave it out for now.

Example: Item

To demonstrate where the approach taken by the yaml package is lacking, I’ll use the following running example.

{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (FromJSON(..), withObject, withText, (.:), (.:?), (.!=))
import Data.Yaml (decodeEither)
import Data.Text (Text)
import Control.Applicative

data Item = Item
  Text -- title
  Int  -- quantity
  deriving Show

The fully-specified Item in YAML may look like this:

title: Shampoo
quantity: 100

In our application, most of the time the quantity will be 1, so we’ll allow two alternative simplified forms. In the first one, the quantity field is omitted and defaulted to 1:

title: Shampoo

In the second form, the object will be flattened to a bare string:

Shampoo

Here’s a reasonably idiomatic way to write an aeson parser for this format:

defaultQuantity :: Int
defaultQuantity = 1

instance FromJSON Item where
  parseJSON v = parseObject v <|> parseString v
    where
      parseObject = withObject "object" $ \o ->
        Item <$> o .: "title" <*> o .:? "quantity" .!= defaultQuantity

      parseString = withText "string" $ \t ->
        return $ Item t defaultQuantity

Shortcomings of FromJSON

The main requirement for a format written by humans is error detection and reporting. Let’s see how the parser we’ve defined copes with human errors.

> decodeEither "{title: Shampoo, quanity: 2}" :: Either String Item
Right (Item "Shampoo" 1)

Unexpected result, isn’t it? If you look closer, you’ll notice that the word quantity is misspelled. But our parser didn’t have any problem with that. Such a typo may go unnoticed for a long time and quietly affect how your application works.

For another example, let’s say I am a returning user who vaguely remembers the YAML format for Items. I might have written something like

*Main Data.ByteString.Char8> decodeEither "{name: Shampoo, quanity: 2}" :: Either String Item
Left "when expecting a string, encountered Object instead"

“That’s weird. I could swear this app accepted some form of an object where you could specify the quantity. But apparently I’m wrong; it only accepts simple strings.”

How to fix it

Check for unrecognized fields

To address the first problem, we need to know the set of acceptable keys. This set is impossible to extract from a FromJSON parser, because it is buried inside an opaque function. Let’s change parseJSON to have type FieldParser a, where FieldParser is an applicative functor that we’ll define shortly. The values of FieldParser can be constructed with combinators:

field
  :: Text -- ^ field name
  -> Parser a -- ^ value parser
  -> FieldParser a

optField
  :: Text -- ^ field name
  -> Parser a -- ^ value parser
  -> FieldParser (Maybe a)

The combinators are analogous to the ones I described in JSON validation combinators.

So how do we implement FieldParser? One (“initial”) way is to use a free applicative functor and later interpret it in two ways: as a FromJSON-like parser and as a set of valid keys. But there’s another (“final”) way, which is to compose the applicative functor from components, one per required semantics.
The semantics of FromJSON is given by ReaderT Object (Either ParseError). The semantics of a set of valid keys is given by Constant (HashMap Text ()). We take the product of these semantics to get the implementation of FieldParser:

newtype FieldParser a = FieldParser
  (Product
    (ReaderT Object (Either ParseError))
    (Constant (HashMap Text ()))
    a)

Notice how I used HashMap Text () instead of HashSet Text? This is a trick to be able to subtract this from the object (represented as HashMap Text Value) later. Another benefit of this change is that it’s no longer necessary to give a name to the object (often called o), which I’ve always found awkward.

Improve error messages

Aeson’s approach to error messages is straightforward: it tries every alternative in turn and, if none succeeds, it returns the last error message. There are two approaches to get more sophisticated error reporting:

Collect errors from all alternatives and somehow merge them. Each error would carry its level of “matching”. An alternative that matched the object but failed at key lookup matches better than one that expected a string instead of an object. Thus the error from the first alternative would prevail. If there are multiple errors on the same level, we should try to merge them. For instance, if we expect an object or a string but got an array, then the error message should mention both object and string as valid options.

Limited backtracking. This is what Parsec does. In our example, when it was determined that the object was “at least somewhat” matched by the first alternative, the second one would have been abandoned. This approach is rather restrictive: if you have two alternatives each expecting an object, the second one will never fire. The benefit of this approach is its efficiency (sometimes real, sometimes imaginary), since we never explore more than one alternative deeply.

It turns out, when parsing Values, we can remove some of the backtracking without imposing any restrictions. This is because we can “factor out” common parser prefixes. If we have two parsers that expect an object, this is equivalent to having a single parser expecting an object. To see this, let’s represent a parser as a record with a field per JSON “type”:

data Parser a = Parser
  { parseString :: Maybe (Text -> Either ParseError a)
  , parseArray  :: Maybe (Vector Value -> Either ParseError a)
  , parseObject :: Maybe (HashMap Text Value -> Either ParseError a)
  ...
  }

Writing a function Parser a -> Parser a -> Parser a which merges individual fields is then a simple exercise (a sketch follows below).
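For concreteness, here is one way that merge might go, using reduced stand-in types rather than the article's actual ones (the real implementation, as noted below, is derived with generics-sop):

import Data.Text (Text)
import qualified Data.HashMap.Strict as HM
import qualified Data.Vector as V

-- Reduced stand-ins for the article's types, just for this sketch.
data Value
  = String Text
  | Array (V.Vector Value)
  | Object (HM.HashMap Text Value)

type ParseError = String

data Parser a = Parser
  { parseString :: Maybe (Text -> Either ParseError a)
  , parseArray  :: Maybe (V.Vector Value -> Either ParseError a)
  , parseObject :: Maybe (HM.HashMap Text Value -> Either ParseError a)
  }

-- Merge one component: if both parsers handle this JSON type, try the
-- first and fall back to the second on failure; otherwise keep
-- whichever handler exists, so 'Nothing' fields still record which
-- JSON types are invalid (for "expected object or string" messages).
mergeComponent
  :: Maybe (i -> Either ParseError a)
  -> Maybe (i -> Either ParseError a)
  -> Maybe (i -> Either ParseError a)
mergeComponent (Just f) (Just g) = Just $ \x -> either (const (g x)) Right (f x)
mergeComponent f Nothing = f
mergeComponent Nothing g = g

orParser :: Parser a -> Parser a -> Parser a
orParser p q = Parser
  { parseString = mergeComponent (parseString p) (parseString q)
  , parseArray  = mergeComponent (parseArray p)  (parseArray q)
  , parseObject = mergeComponent (parseObject p) (parseObject q)
  }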
Why is every field wrapped in Maybe? How is Nothing different from Just $ const $ Left "..."? This is so that we can see which JSON types are valid and give a better error message. If we tried to parse a JSON number as an Item, the error message would say that it expected an object or a string, because only those fields of the parser would be Just values.

Implementation

As you might notice, the Parser type above can be mechanically derived from the Value datatype itself. In my actual implementation, I use generics-sop with great success to reduce the boilerplate. To give you an idea, here’s the real definition of the Parser type:

newtype ParserComponent a fs = ParserComponent (Maybe (NP I fs -> Either ParseError a))
newtype Parser a = Parser (NP (ParserComponent a) (Code Value))

We can then apply a Parser to a Value using this function.

I’ve implemented this YAML parsing layer for our needs at Signal Vine. We are happy to share the code, in case someone is interested in maintaining this as an open source project. 2015-07-26T20:00:00Zhttp://ro-che.info/articles/2015-07-26-better-yaml-parsing

Introducing Luminance, a safer OpenGL API

Planet Haskell - 20 hours 32 min ago
A few weeks ago, I was writing Haskell lines for a project I had been working on for a very long time. That project was a 3D engine. There are several posts about it on my blog, feel free to check them out.

The thing is… Times change. The more time passes, the more mature I become in what I do in the Haskell community. I’m a demoscener, and I need to be productive. Writing a whole 3D engine for such a purpose is a good thing, but I was going round and round in circles, changing the whole architecture every now and then. I couldn’t make up my mind, and couldn’t help it. So I decided to stop working on that, and move on.

If you are a Haskell developer, you might already know Edward Kmett. Each talk with him is always interesting and I always end up with new ideas and new knowledge. Sometimes, we talk about graphics, and sometimes, he tells me that writing a 3D engine from scratch and releasing it to the community is not a very good move.

I’ve been thinking about that, and in the end, I agree with Edward. There are two reasons that make such a project hard and not interesting for a community:

a good “3D engine” is a specialized one – for FPS games, for simulations, for sport games, for animation, etc. If we know what the player will do, we can optimize a lot of stuff, and put less detail into the less important parts of the visuals. For instance, some games don’t really care about skies, so they can use simple skyboxes with nice textures to bring a nice touch of atmosphere, without destroying performance. In a game like a flight simulator, skyboxes have to be avoided, to go with other techniques that provide a correct experience to players. Even though an engine could provide both techniques, apply that problem to almost everything – i.e. space partitioning for instance – and you end up with a nightmare to code;

an engine can be a very bloated piece of software – because of point 1. It’s very hard to keep an engine up to date regarding technologies, and make everyone happy, especially if the engine targets a large audience of people – i.e. hackage.

Point 2 might be strange to you, but that’s often the case. Building a flexible 3D engine is a very hard and non-trivial task. Because of point 1, you utterly need to restrict things in order to get the required level of performance or design. There are people out there – especially in the demoscene world – who can build up 3D engines quickly. But keep in mind those engines are limited to demoscene applications, and enhancing them to support something else is not a trivial task. In the end, you might end up with a lot of bloated code you’ll eventually zap later on to build something different for another purpose – eh, demoscene is about going dirty, right?! ;)

Basics

So… Let’s go back to the basics. In order to include everyone, we need to provide something that everyone can download, install, learn and use. Something like OpenGL. For Haskell, I highly recommend using gl. It’s built against the gl.xml file – released by Khronos. If you need sound, you can use the complementary library I wrote, using the same naming convention, al.

The problem with that is the fact that OpenGL is a low-level API, especially for newcomers or people who need to get things done quickly. The part that bothers – wait, no, annoys – me the most is the fact that OpenGL is a very old library which was designed two decades ago. And we suffer from that. A lot.

OpenGL is a stateful graphics library. That means it maintains a state, a context, in order to work properly. Maintaining a context or state is a legit need, don’t get it twisted. However, if the design of the API doesn’t fit such a way of dealing with the state, we come across a lot of problems. Is there one programmer who hasn’t experienced black screens yet? I don’t think so.

The OpenGL API exposes a lot of functions that perform side-effects. Because OpenGL is weakly typed – almost all objects you can create in OpenGL share the same GL(u)int type, which is very wrong – you might end up doing nasty things. Worse, it uses an internal binding system to select the objects you want to operate on. For instance, if you want to upload data to a texture object, you need to bind the texture before calling the texture upload function. If you don’t, well, that’s bad for you. There’s no way to verify code safety at compile-time.

You’re not convinced yet? OpenGL doesn’t tell you directly how to change things on the GPU side. For instance, do you think you have to bind your vertex buffer before performing a render, or is it sufficient to bind the vertex array object only? All those questions don’t have direct answers, and you’ll need to dig in several wikis and forums to get your answers – the answer to that question is “Just bind the VAO, pal.”

What can we do about it?

Several attempts to enhance that safety have come up. The first thing we have to do is to wrap all OpenGL object types into proper types. For instance, we need several types for Texture and Framebuffer.
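A minimal sketch of that first step (illustrative names, not luminance’s actual definitions):

import Data.Word (Word32)

-- Stand-in for OpenGL's GLuint, which the gl package exposes.
type GLuint = Word32

-- Distinct wrappers for object kinds that OpenGL lumps together.
newtype Texture     = Texture GLuint
newtype Framebuffer = Framebuffer GLuint

-- Now passing a Texture where a Framebuffer is expected is a type
-- error, even though both are just a GLuint underneath.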
Luminance

In my desperate quest to provide a safer OpenGL API, I decided to create a library from scratch called luminance. That library is not exactly a safe OpenGL wrapper, but it's very close to being one.

luminance provides the same objects as OpenGL does, but through a safer way to create, access and use them. It's an effort to provide safe abstractions suited for graphics applications, without destroying performance. It's not a 3D engine; it's a rendering framework. There are no lights, asset managers or that kind of feature. It's just a tiny, simple and powerful API.

Example

luminance is still a huge work in progress. However, I can already show an example. The following example opens a window but doesn't render anything. Instead, it creates a buffer on the GPU and performs several simple operations on it.

-- Several imports.
import Control.Monad.IO.Class ( MonadIO(..) )
import Control.Monad.Trans.Resource -- from the resourcet package
import Data.Foldable ( traverse_ )
import Graphics.Luminance.Buffer
import Graphics.Luminance.RW
import Graphics.UI.GLFW -- from the GLFW-b package
import Prelude hiding ( init ) -- clash with GLFW-b’s init function

windowW,windowH :: Int
windowW = 800
windowH = 600

windowTitle :: String
windowTitle = "Test"

main :: IO ()
main = do
  -- Initiate the OpenGL context with GLFW.
  init
  windowHint (WindowHint'Resizable False)
  windowHint (WindowHint'ContextVersionMajor 3)
  windowHint (WindowHint'ContextVersionMinor 3)
  windowHint (WindowHint'OpenGLForwardCompat False)
  windowHint (WindowHint'OpenGLProfile OpenGLProfile'Core)
  window <- createWindow windowW windowH windowTitle Nothing Nothing
  makeContextCurrent window
  -- Run our application, which needs a (MonadIO m,MonadResource m) => m.
  -- We traverse_ so that we just terminate if we’ve failed to create the
  -- window.
  traverse_ (runResourceT . app) window
  terminate

-- GPU regions. For this example, we’ll just create two regions. One of floats
-- and the other of ints. We’re using read/write (RW) regions so that we can
-- send values to the GPU and read them back.
data MyRegions = MyRegions {
    floats :: Region RW Float
  , ints   :: Region RW Int
  }

-- Our logic.
app :: (MonadIO m,MonadResource m) => Window -> m ()
app window = do
  -- We create a new buffer on the GPU, getting back regions of typed data
  -- inside of it. For that purpose, we provide a monadic type used to build
  -- regions through the 'newRegion' function.
  region <- createBuffer $ MyRegions <$> newRegion 10 <*> newRegion 5
  clear (floats region) pi -- clear the floats region with pi
  clear (ints region) 10 -- clear the ints region with 10
  readWhole (floats region) >>= liftIO . print -- print the floats as an array
  readWhole (ints region) >>= liftIO . print -- print the ints as an array
  floats region `writeAt` 7 $ 42 -- write 42 at index=7 in the floats region
  floats region @? 7 >>= traverse_ (liftIO . print) -- safe getter (Maybe)
  floats region @! 7 >>= liftIO . print -- unsafe getter
  readWhole (floats region) >>= liftIO . print -- print the floats as an array

Those read/write regions could also have been made read-only or write-only. For such regions, some functions can't be called, and trying to do so will make your compiler angry and throw errors at you.
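How can a library rule out such calls at compile time? A minimal sketch of the general technique – phantom access tags plus type classes; this is an illustration of the idea, not necessarily luminance's actual encoding – looks like this:

data R   -- read-only
data W   -- write-only
data RW  -- read-write

class Readable a
instance Readable R
instance Readable RW

class Writable a
instance Writable W
instance Writable RW

-- A reading function can then demand a Readable region, e.g.
--   readWhole :: (MonadIO m,Readable rw) => Region rw a -> m [a]
-- so calling it on a write-only region is rejected by the type checker.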
Up to now, the buffers are created persistently and coherently. That might cause issues with OpenGL synchronization, but I'll wait for benchmarks before changing that part. If benchmarking spots performance bottlenecks, I'll introduce more kinds of buffers and regions to deal with the special cases.

luminance doesn't force you to use a specific windowing library. You can embed it into any kind of host library.

What's to come?

luminance is very young. At the moment of writing this article, it's only 26 commits old. I just wanted to present it so that people know it exists; it will be released as soon as possible. The idea is to provide a library that, if you use it, won't create black screens because of framebuffer incorrectness or buffer issues. It'll ease debugging OpenGL applications and prevent you from making nasty mistakes.

I'll keep posting about luminance as I get new features implemented.

As always, keep the vibe, and happy hacking!

Mystery of the misaligned lowercase ‘p’

Planet Haskell - 20 hours 32 min ago
I've seen this ad on the subway at least a hundred times, but I never noticed this oddity before: Specifically, check out the vertical alignment of those ‘p’s: Notice that it is not simply an unusual font. The height of the ‘p’ matches the other lowercase letters exactly. Here's how it ought to look:

At first I thought the designer was going for a playful, informal logotype. Some of the other lawyers who advertise in the subway go for a playful, informal look. But it seemed odd in the context of the rest of the sign. As I wondered what happened here, a whole story unfolded in my mind. Here's how I imagine it went down:

The ‘p’, in proper position, collided with the edge of the light-colored box, or overlapped it entirely, causing the serif to disappear into the black area. The designer (Spivack's nephew) suggested enlarging the box, but there was not enough room. The sign must fit a standard subway car frame, so its size is prescribed. The designer then suggested eliminating “LAW OFFICES OF”, or eliminating some of the following copy, or reducing its size, but Spivack refused to cede even a single line. “Millions for defense,” cried Spivack, “but not one cent for tribute!” Spivack found the obvious solution: “Just move the ‘p’ up so it doesn't bump into the edge, stupid!” Spivack's nephew complied. “Looks great!” said Spivack. “Print it!”

I have no real reason to believe that most of this is true, but I find it all so very plausible.

[ Addendum: Noted typographic expert Jonathan Hoefler says “I'm certain you are correct.” ]

Another Approach to Default Function Parameters

Planet Haskell - 20 hours 32 min ago
Recently, there has been some new discussion around the issue of providing default values for function parameters in Haskell. First, Gabriel Gonzalez showed us his new optional-args library, which provides new types for optional arguments along with heavy syntactic overloading. To follow that, Dimitri Sabadie published a blog post discouraging the use of the currently popular Default type class. These are both good discussions, and as with any good discussion they have been lingering around in the back of my head.

Since those discussions took place, I've been playing with my point in the FRP-web-framework design space - Francium. I made some big refactorings on an application using Francium, mostly extending so-called “component” data types (buttons, checkboxes, etc), and was frustrated with how much code broke just from introducing new record fields. The Commercial Haskell group published an article on how to design for extensibility back in March, so I decided to revisit that.

It turns out that with a little bit of modification, the approach proposed in designing for extensibility also covers optional arguments pretty well! First, let's recap what it means to design for extensibility. The key points are:

1. Functions take Settings values, which specify a general configuration.
2. These Settings values are opaque, meaning they cannot be constructed by a data constructor; they have a smart constructor instead. This smart constructor allows you to provide default values.
3. Provide get/set functions for all configurable fields in your Settings data type, preventing the use of record syntax for updates (which leaks implementation details).

Regular Haskell users will already be familiar with the pattern in point 3: we often use a different piece of technology to solve this problem - lenses. Lenses are nice here because they reduce the surface area of our API - two exports can be reduced to just one, which I believe reduces the time needed to learn a new library. They also compose very nicely, in that they can be embedded into other computations with ease.

With point 3 amended to use some form of lens, we end up with the following type of presentation. Take an HTTP library, for example. Our hypothetical library would have the following exports:

data HTTPSettings

httpKeepAlive :: Lens HTTPSettings Bool
httpCookieJar :: Lens HTTPSettings CookieJar

defaultHTTPSettings :: HTTPSettings

httpRequest :: HTTPSettings -> HTTPRequest -> IO Response

which might have usage

httpRequest (defaultHTTPSettings & httpKeepAlive .~ True) aRequest

This is an improvement, but I've never particularly liked the reverse function application stuff with &. The repeated use of & is essentially working in an Endo Writer monad, or more generally - a state monad. The lens library ships with operators for working specifically in state monads (of course it does), so let's use that:

httpRequest :: State HTTPSettings x -> HTTPRequest -> IO Response

....

httpRequest (do httpKeepAlive .= True) aRequest

It's a small change here, but when you are overriding a lot of parameters, the sugar offered by the use of do is hard to give up - especially when you throw in more monadic combinators like when and unless. With this seemingly simple syntactic change, something interesting has happened; something which is easier to see if we break open httpRequest:

httpRequest :: State HTTPSettings x -> HTTPRequest -> IO Response
httpRequest mkConfig request =
  let config = execState mkConfig defaultHTTPSettings
  in ...

Now the default configuration has moved inside the HTTP module, rather than being supplied by the user. All the user provides is essentially a function HTTPSettings -> HTTPSettings, dressed up in a state monad. This means that to use the default configuration, we simply provide a do-nothing state composition: return (). We can even give this a name

def :: State a ()
def = return ()

and voilà, we now have the lovely name overloading offered by Data.Default, but without the need to introduce a lawless type class!
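To see the whole pattern end to end, here is a small, self-contained sketch along the same lines – the AppSettings record and its fields are hypothetical, invented for this illustration – combining an opaque settings type, lenses, and the state-monad reading of def:

{-# LANGUAGE TemplateHaskell #-}
import Control.Lens
import Control.Monad.State

data AppSettings = AppSettings
  { _verbose :: Bool
  , _retries :: Int
  } deriving Show

makeLenses ''AppSettings

defaultSettings :: AppSettings
defaultSettings = AppSettings False 3

-- The do-nothing override: keep every default.
def :: State a ()
def = return ()

-- The library interprets the overrides against its own defaults.
run :: State AppSettings () -> IO ()
run mkConfig = print (execState mkConfig defaultSettings)

main :: IO ()
main = do
  run def                    -- AppSettings False 3
  run (do verbose .= True    -- override just the fields you care about
          retries .= 5)      -- AppSettings True 5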
To conclude: in this post I've shown that by slightly modifying the presentation of an approach to building APIs with extensibility in mind, we get the main benefit of Data.Default. This main benefit - the raison d'être of Data.Default - is the ability to use the single symbol def whenever you just want a configuration but don't care what it is. We still have that ability, and we didn't have to rely on an ad hoc type class to get there.

However, it's not all rainbows and puppies: we did have to give something up to get here, and what we've given up is compiler-enforced consistency. With Data.Default, there is only a single choice of default configuration for a given type, so you know that def :: HTTPSettings will be the same set of defaults everywhere. With my approach, exactly what def means is down to the function you're calling and how it wants to interpret def. In practice, due to the lack of laws on def, there wasn't much reasoning you could do about that single instance anyway, so I'm not sure much is given up in practice. I try to keep to a single interpretation of def in my libraries by still exporting defaultHTTPSettings, and then using execState mkConfig defaultHTTPSettings whenever I need to interpret a State HTTPSettings.

Supporting HTTP/2

Planet Haskell - 20 hours 32 min ago
We are happy to announce that Warp version 3.1.0 and WarpTLS version 3.1.0 have been released. These versions include the following changes:

Warp: the APIs have been cleaned up, as explained in Cleaning up the Warp APIs.

WarpTLS: RC4 has been removed from defaultTlsSettings since RC4 is no longer safe. Please read RFC 7465: Prohibiting RC4 Cipher Suites and RC4 NOMORE for more information.

But the main new feature is HTTP/2 support! The latest versions of Firefox and Chrome support HTTP/2 over TLS. WarpTLS uses HTTP/2 instead of HTTP/1.1 if TLS ALPN (Application-Layer Protocol Negotiation) selects HTTP/2. So, if you upgrade Warp and WarpTLS on your TLS-serving site and anyone visits it with Firefox or Chrome, your content is automatically transferred via HTTP/2 over TLS.

HTTP/2 retains the semantics of HTTP/1.1, such as request and response headers, meaning you don't have to modify your WAI applications – just link them against the new version of WarpTLS.
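Concretely, a WAI application served through WarpTLS needs nothing HTTP/2-specific. A minimal sketch looks like this – the certificate and key file paths are placeholders you would replace with your own:

{-# LANGUAGE OverloadedStrings #-}
import Network.HTTP.Types (status200)
import Network.Wai (Application, responseLBS)
import Network.Wai.Handler.Warp (defaultSettings, setPort)
import Network.Wai.Handler.WarpTLS (runTLS, tlsSettings)

app :: Application
app _request respond =
  respond (responseLBS status200 [("Content-Type", "text/plain")] "Hello, HTTP/2!")

main :: IO ()
main =
  -- With the new WarpTLS, clients that offer HTTP/2 via ALPN get HTTP/2;
  -- everyone else falls back to HTTP/1.1. The application code is unchanged.
  runTLS (tlsSettings "certificate.pem" "key.pem")
         (setPort 443 defaultSettings)
         app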
Rather, HTTP/2 redesigns the transport to solve the following issues:

Redundant headers: HTTP/1.1 repeatedly transfers almost exactly the same headers for every request and response, wasting bandwidth.

Poor concurrency: only one request or response can be sent over one TCP connection at a time (request pipelining is not used in practice). All HTTP/1.1 can do is make use of multiple TCP connections, up to 6 per site.

Head-of-line blocking: if one request is blocked on a server, no other requests can be sent on the same connection.

To solve issue 1, HTTP/2 provides a header compression mechanism called HPACK. To fix issues 2 and 3, HTTP/2 makes just one TCP connection per site and multiplexes frames of requests and responses asynchronously. The default number of concurrent streams is 100.

I guess the HTTP/2 implementors would agree that the most challenging parts of HTTP/2 are HPACK and priority. HPACK originally defined a reference set in addition to indexing and Huffman encoding. During the standardization activities, I found that the reference set made the spec really complicated but did not contribute to the compression ratio. My big contribution to HTTP/2 was a proposal to remove the reference set from HPACK. The final HPACK became much simpler.

Since multiple requests and responses are multiplexed over one TCP connection, priority is important. Without priority, the response for a big file download would occupy the connection. I surveyed priority queues but could not find a suitable technology, so I needed to invent random heaps myself. If time allows, I would like to describe random heaps on this blog someday. The http2 library provides well-tested HPACK and structured priority queues as well as frame encoders/decoders.

My interest in implementing HTTP/2 in Haskell was in how to map Haskell threads to HTTP/2 elements. In HTTP/1.1, the role of Haskell threads is clear: one HTTP (TCP) connection is one Haskell thread. After trial and error, I finally reached an answer: one HTTP/2 stream (roughly, a pair of request and response) is one Haskell thread. To avoid the overhead of spawning Haskell threads, I introduced thread pools to Warp. Yes, Haskell threads shine even in HTTP/2.

HTTP/2 provides plain (non-encrypted) communication, too. But since Firefox and Chrome require TLS, TLS is a must in practice. TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 is a mandatory cipher suite in HTTP/2. Unfortunately, many pieces were missing from the tls library, so it was necessary for me to implement ALPN, ECDHE (elliptic-curve Diffie-Hellman, ephemeral) and AES GCM (Galois/Counter Mode). They have already been merged into the tls and cryptonite libraries.

My next targets are improving the performance of HTTP/2 over TLS and implementing TLS 1.3.

I would like to thank Tatsuhiro Tsujikawa, the author of nghttp2 – the reference implementation of HTTP/2 – and Moto Ishizawa, the author of h2spec. Without these tools, I could not have made the Warp/WarpTLS libraries so mature. They also answered my countless questions. RFC 7540 says “the Japanese HTTP/2 community provided invaluable contributions, including a number of implementations as well as numerous technical and editorial contributions”. I'm proud of being a member of that community.

Enjoy HTTP/2 in Haskell!

Casual Hacking With stack

Planet Haskell - 20 hours 32 min ago
Sandboxes are exceptionally helpful not just for working on long-term Haskell projects, but also for casual experiments. While playing around, we tend to install all sorts of packages in a carefree way, which greatly increases the risk of entering cabal hell. While vanilla cabal-install sandboxes prevent such a disaster, using them systematically for experiments means that, unless you are meticulous, you will end up either with dozens of .hs files in a single sandbox or with dozens of copies of the libraries strewn across your home directory. And no one likes to be meticulous while playing around. In that context stack, the recently released alternative to cabal-install, can prevent trouble with installing packages in a way more manageable than through ad hoc sandboxes. In this post, I will suggest a few ways of using stack that may be convenient for experiments. I have been using stack for only a few days, so suggestions are most welcome!

I won't dwell on the motivation and philosophy behind stack [1]. Suffice it to say that, at least in the less exotic workflows, there is a centralised package database somewhere in ~/.stack with packages pulled from a Stackage snapshot (and therefore known to be compatible with each other), which is supplemented by a per-project database (that is, just like cabal sandboxes) for packages not in Stackage (from Hackage or anywhere else). As that sounds like a great way to avoid headaches, we will stick to this arrangement, with only minor adjustments.

Once you have installed stack [2], you can create a new environment for experiments with stack new:

$ mkdir -p Development/haskell/playground
$ cd Development/haskell/playground
$ stack new --prefer-nightly

The --prefer-nightly option makes stack use a nightly snapshot of Stackage, as opposed to a long-term support one. As we are just playing around, it makes sense to pick packages as recent as possible from the nightly instead of the LTS. (Moreover, I use Arch Linux, which already has GHC 7.10 and base 4.8, while the current LTS snapshot assumes base 4.7.) If this is the first time you use stack, it will pick the latest nightly; otherwise it will default to whatever nightly you already have in ~/.stack.

stack new creates a neat default project structure for you [3]:

$ ls -R
.:
app  LICENSE  new-template.cabal  Setup.hs  src  stack.yaml  test

./app:
Main.hs

./src:
Lib.hs

./test:
Spec.hs

Of particular interest is the stack.yaml file, which holds the settings for the local stack environment. We will talk more about it soon.

flags: {}
packages:
- '.'
extra-deps: []
resolver: nightly-2015-07-19

As for the default new-template.cabal file, you can use its build-depends section to keep track of what you are installing. That will make stack build (the command which builds the current project without installing it) download and install any dependencies you add to the cabal file automatically. Besides that, having the installed packages noted down may prove useful in case you need to reproduce your configuration elsewhere [4].
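As an illustration, the library stanza of the generated cabal file might end up looking like this after a bit of experimenting (the exact layout of the generated new-template.cabal may differ; fmlist here anticipates the test run below):

library
  hs-source-dirs:      src
  exposed-modules:     Lib
  build-depends:       base >= 4.7 && < 5
                     , fmlist  -- noted down when we started playing with it
  default-language:    Haskell2010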
If your experiments become a real project, you can clean up the build-depends without losing track of the packages you installed for testing purposes by moving their entries to a second cabal file, kept in a subdirectory:

$ mkdir xp
$ cp new-template.cabal xp/xp.cabal
$ cp LICENSE xp  # Too lazy to delete the lines from the cabal file.
$ cd xp
$ vi Dummy.hs    # module Dummy where
$ vi xp.cabal    # Adjust accordingly, and list your extra deps.

You also need to tell stack about this fake subproject. All it takes is adding an entry for the subdirectory in stack.yaml:

packages:
- '.'  # The default entry.
- 'xp'

With the initial setup done, we use stack build to compile the projects:

$ stack build
new-template-0.1.0.0: configure
new-template-0.1.0.0: build
fmlist-0.9: download
fmlist-0.9: configure
fmlist-0.9: build
new-template-0.1.0.0: install
fmlist-0.9: install
xp-0.1.0.0: configure
xp-0.1.0.0: build
xp-0.1.0.0: install
Completed all 3 actions.

In this test run, I added fmlist as a dependency of the fake package xp, and so it was automatically installed by stack. The output of stack build goes to a .stack-work subdirectory.

With the packages built, we can use GHCi in the stack environment with stack ghci. It loads the library source files of the current project by default:

$ stack ghci
Configuring GHCi with the following packages: new-template, xp
GHCi, version 7.10.1: http://www.haskell.org/ghc/  :? for help
[1 of 2] Compiling Lib   ( /home/duplode/Development/haskell/playground/src/Lib.hs, interpreted )
[2 of 2] Compiling Dummy ( /home/duplode/Development/haskell/playground/xp/Dummy.hs, interpreted )
Ok, modules loaded: Dummy, Lib.
*Lib> import qualified Data.FMList as F -- Which we have just installed.
*Lib F> -- We can also load executables specified in the cabal file.
*Lib F> :l Main
[1 of 2] Compiling Lib   ( /home/duplode/Development/haskell/playground/src/Lib.hs, interpreted )
[2 of 2] Compiling Main  ( /home/duplode/Development/haskell/playground/app/Main.hs, interpreted )
Ok, modules loaded: Lib, Main.
*Main F>

Dependencies not in Stackage have to be specified in stack.yaml as well as in the cabal files, so that stack can manage them too. Alternative sources of packages include source trees in subdirectories of the project, Hackage and remote Git repositories [5]:

flags: {}
packages:
- '.'
- 'xp'
- location: deps/acme-missiles-0.3  # Sources in a subdirectory.
  extra-dep: true  # Mark as dep, i.e. not part of the project proper.
extra-deps:
- acme-safe-0.1.0.0  # From Hackage.
- acme-dont-1.1      # Also from Hackage, dependency of acme-safe.
resolver: nightly-2015-07-19

stack build will then install the extra dependencies to .stack-work/install. You can use stack solver to chase the indirect dependencies introduced by them. For instance, this is its output after commenting out the acme-dont line in the stack.yaml just above:

$ stack solver --no-modify-stack-yaml
This command is not guaranteed to give you a perfect build plan
It's possible that even with the changes generated below, you will still need to do some manual tweaking
Asking cabal to calculate a build plan, please wait
extra-deps:
- acme-dont-1.1

To conclude this tour: once you get bored of the initial Stackage snapshot, all it takes to switch is changing the resolver field in stack.yaml (with nightlies, that amounts to changing the date at the end of the snapshot name). That will cause all dependencies to be downloaded and built from the chosen snapshot the next time stack build is run. As of now, the previous snapshot will remain in ~/.stack unless you go there and delete it manually; however, a command for removing unused snapshots is in the plans.

I have not tested the sketch of a workflow presented here extensively, yet what I have seen was enough to convince me that stack can provide a pleasant experience for casual experiments as well as full-fledged projects. Happy hacking!

Update: There is now a follow-up post about the other side of the coin, Migrating a Project to stack.
[1] For that, see Why is stack not cabal?, written by a member of its development team.

[2] For installation guidance, see the GitHub project wiki. Installing stack is easy, and there are many ways to do it (I simply got it from Hackage with cabal install stack).

[3] To create an environment for an existing project, with its own structure and cabal file, you would use stack init instead.

[4] In any case, you can also use stack exec -- ghc-pkg list to see all packages installed from the snapshot you are currently using. That, however, will be far messier than the build-depends list, as it will include indirect dependencies as well.

[5] For the latter, see the project wiki.

On the unsafety of interleaved I/O

Planet Haskell - 20 hours 32 min ago
One area where I'm at odds with the prevailing winds in Haskell is lazy I/O. It's often said that lazy I/O is evil, scary and confusing, and it breaks things like referential transparency. Having a soft spot for it, and not liking most of the alternatives, I end up on the opposite side when the [...]

yesod-devel

Planet Haskell - 20 hours 32 min ago
Yesod-devel

A new development server is upon us. Its name is yesod-devel.

This post is about yesod-devel, my Google Summer of Code project, and not the current yesod-devel that is part of the yesod framework. It's not yet available and is still under development, meaning a lot of things in this post may change.

yesod-devel is a development server for Haskell web applications that are WAI compliant.

What we expect from the application.

This is my opinion of what I expect from the web application, and it may therefore change depending on what the community thinks. I think this design is good and loosely coupled, and it leaves a lot of freedom to the web application.

At the heart of your application (the root of your web application) we expect you to have an Application.hs file, which holds the Application module. This is the file pointed to by the main-is section of your .cabal file.

This Application.hs file holds the main function, which fires up a warp server at an address and port specified in environment variables. yesod-devel will read everything it needs about the web application from environment variables, not from a config file.

It is the responsibility of the web application to set the environment variables (setEnv). This way yesod-devel is very loosely coupled to the web application. That is, we (yesod-devel) will not have to specify the names or paths of your config files, or which serialization format they use.

The environment variables we currently need are:

* haskellWebAddress="/localhost"
* haskellWebPort=""
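For concreteness, here is a minimal sketch of an Application.hs that honours this contract. Only haskellWebPort is consulted, and parsing its value with read is my assumption – the post does not pin down the variable's exact format:

{-# LANGUAGE OverloadedStrings #-}
module Application (main) where

import System.Environment (lookupEnv)
import Network.HTTP.Types (status200)
import Network.Wai (Application, responseLBS)
import Network.Wai.Handler.Warp (run)

app :: Application
app _request respond =
  respond (responseLBS status200 [("Content-Type", "text/plain")] "Hello!")

main :: IO ()
main = do
  -- Fall back to port 3000 when the variable is unset or empty.
  mPort <- lookupEnv "haskellWebPort"
  let port = case mPort of
        Just s | not (null s) -> read s
        _                     -> 3000
  run port app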
What you should expect from yesod-devel.

Automatic source and data file discovery. You shouldn't have to tell yesod-devel where your source files or data files (hamlet files and so forth) are, as long as your web application knows where everything is. All you need to do is call the yesod-devel binary inside your app's root.

Building and running your code. yesod-devel, when run in your web application's working directory, will build and run your application on localhost:3000.

Automatic code reloading. yesod-devel supports automatic code reloading for any file modified in the current working directory. This is more proof of just how loosely coupled yesod-devel will be from your application. Newly added files don't trigger a recompile, and neither do deleted files; only file modifications do. This is a deliberate design choice: text editors and other programs keep adding and removing files from the file system, and if we listened for any randomly created or deleted file we would end up triggering useless recompiles. This, however, means there's a trade-off: for being so loosely coupled, we have to manually restart yesod-devel every time we add or delete files.

Reverse proxying. yesod-devel will start listening on the address and port specified in your environment variables haskellWebAddress and haskellWebPort respectively, and reverse proxy it to your localhost:3000.

Report error messages to the browser. yesod-devel will report error messages from ghc to the web browser on localhost:3000.

Command line arguments.

Currently yesod-devel takes no command line arguments. However, in the plans are the following:

--configure-with
--no-reverse-proxying
--show-iface fileName.hs

You should be fine without passing any of these arguments unless you have a special reason to.

Currently yesod-devel will configure your web application with the following flags to cabal:

-flibrary-only
--disable-tests
--disable-benchmarks
-fdevel
--disable-library-profiling
--with-ld=yesod-ld-wrapper
--with-ghc=yesod-ghc-wrapper
--with-ar=yesod-ar-wrapper
--with-hc-pkg=ghc-pkg

I assume that these arguments are self-explanatory.