# Crawlie (the crawler)

Crawlie is meant to be a simple Elixir library for writing decently-performing crawlers with minimal effort. It’s a work in progress, but it should become relatively usable by the end of 2016.
## Usage example
See the `crawlie_example` project.
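
If you don't have that project handy, here is a minimal sketch of what driving Crawlie might look like. The exact shape of `Crawlie.crawl/3`, the option key and the `MyParser` module (sketched under "Inner workings" below) are assumptions made for illustration; `crawlie_example` and the docs are authoritative.

```elixir
# A hedged sketch of driving Crawlie, not verbatim from the docs.
urls = ["https://www.example.com/"]

# `Crawlie.crawl/3` is assumed to take an enumerable of urls, a module
# implementing Crawlie's parser callbacks and a keyword list of options,
# returning a stream of extracted results.
results =
  urls
  |> Crawlie.crawl(MyParser, max_depth: 2)
  |> Enum.to_list()

IO.inspect(results)
```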
## Inner workings
Crawlie uses Elixir’s GenStage to parallelise the work. Most of the logic is handled by the `UrlManager`, which:

- consumes the url collection passed by the user,
- receives the urls extracted during subsequent processing,
- makes sure no url is processed more than once,
- keeps the “discovered urls” collection as small as possible by traversing the url tree in a roughly depth-first manner.
The urls are requested from the `UrlManager` by a GenStage Flow, which fetches them in parallel using HTTPoison and parses the responses using user-provided callbacks. Discovered urls are sent back to the `UrlManager`.
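
The user-provided callbacks would typically live together in one parser module. The callback names below (`parse/3`, `extract_uris/3`, `extract_data/3`) and the use of Floki for HTML parsing are illustrative assumptions, not a verbatim copy of Crawlie's behaviour spec:

```elixir
defmodule MyParser do
  # A sketch of a Crawlie parser module; the callback names and signatures
  # are assumed for illustration, so check the Crawlie docs for the real spec.

  # Turn the raw HTTPoison response body into a parsed document.
  def parse(_url, body, _options), do: {:ok, Floki.parse(body)}

  # Urls discovered on the page; these get fed back to the UrlManager.
  def extract_uris(_url, parsed, _options) do
    parsed
    |> Floki.find("a")
    |> Floki.attribute("href")
  end

  # Data that should end up in the crawl results.
  def extract_data(_url, parsed, _options) do
    Floki.find(parsed, "title")
  end
end
```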
## Configuration
See the docs for supported options.
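
Options are passed to the crawl call as a keyword list. The keys below are illustrative assumptions only; consult the docs for the actual supported options and their defaults:

```elixir
# Hypothetical option keys, shown only to illustrate the keyword-list shape.
options = [
  max_depth: 3,    # how deep to follow discovered urls
  max_retries: 2   # how many times to retry a failed fetch
]

Crawlie.crawl(["https://www.example.com/"], MyParser, options)
```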
## Planned features
TODO
## Installation
The package can be installed as follows:

1. Add `crawlie` to your list of dependencies in `mix.exs`:

   ```elixir
   def deps do
     [{:crawlie, "~> 0.2.0-alpha1"}]
   end
   ```

2. Ensure `crawlie` is started before your application:

   ```elixir
   def application do
     [applications: [:crawlie]]
   end
   ```