Originally published at the Pythian blog.

Following up on my threat of last week, I released Test::Wrapper on CPAN.

If you read my previous blog entry, you know that one of the big gotchas of the wrapping gymnastics I was doing was that it was utterly #@$%$#-ing up Test::Builder's internal states. Thus, at that point, it was either run TAP tests or use Test::Wrapper, but don't do both at the same time. Not the most God-awful limitation ever, perhaps, but still not very cool.

Since then, I've taken a second look at the problem and realized that not only can this limitation be overcome, but it can be overcome in a surprisingly easy manner.

The trick is to know that Test::Builder states are kept in a global object, $Test::Builder::Test. Since all the information is kept there, for our meddling to become benign all we have to do is a classic “distract, switch, butcher & reinstate” maneuver:

# slightly simplified from Test::Wrapper's guts

# $original_test is the full name of the test.
#  E.g.  'Test::More::like'

my $wrapped = $original_test;

# get the original test's coderef
my $original_test_ref = eval '\&' . $original_test;

# get its prototype
my $proto = prototype $original_test_ref;
$proto &&= "($proto)";

# okay, let's wrap that test...
eval <<"END_EVAL";

sub $wrapped $proto {

    # magic! we make a local copy of $Test::Builder::Test
    # that we can mangle in every way we want

    local \$Test::Builder::Test = {
        %\$Test::Builder::Test
    };

    my \$builder = bless \$Test::Builder::Test, 'Test::Builder';

    # we change the testing plan to a single test (this one)
    \$builder->{Have_Plan}        = 1;
    \$builder->{Have_Output_Plan} = 1;
    \$builder->{Expected_Tests}   = 1;

    # we capture all the output channels
    my ( \$output, \$failure, \$todo );
    \$builder->output( \\\$output );
    \$builder->failure_output( \\\$failure);
    \$builder->todo_output( \\\$todo );

    # call the original test, which will interact with our
    # modified $Test::Builder::Test
    \$original_test_ref->( \@_ );

    # ... and harvest the output and populate an object with it
    return Test::Wrapper->new(
        output => \$output,
        diag => \$failure,
        todo => \$todo,
    );

    # that's it! leaving the subroutine will make our
    # $Test::Builder::Test get out of scope, which
    # will un-hide the original $Test::Builder::Test
    # unaltered by what happened here
}

END_EVAL

And that, boys and girls, is the crux of Test::Wrapper. The rest is just window-dressing and API goodness.
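The key enabler here is Perl's local, which shadows a package variable for the duration of a scope and automatically reinstates the original on exit. A minimal sketch of that mechanism, with a hypothetical Demo package standing in for Test::Builder:

```perl
package Demo;
our $state = { count => 0 };

sub tweak {
    # shadow the package global with a shallow copy for this scope only;
    # the original is automatically reinstated when the sub returns
    local $state = { %$state };
    $state->{count} = 42;
    return $state->{count};
}

package main;
Demo::tweak();                   # inside the sub, the copy says 42
print $Demo::state->{count};     # outside, the original is still 0
```

This is exactly what the wrapper does to $Test::Builder::Test: the original test function happily mangles the local copy, and the real state object resurfaces untouched.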

[image: cata_mugshots.png, the Catalyst CONTRIBUTORS section with Gravatar mugshots]

wipe spittle off eyebrows

If you are, you might like the little Greasemonkey script (available on userscript.org and GitHub) that I churned out.

The script finds the AUTHORS/CONTRIBUTORS section of POD pages on http://search.cpan.org and adds Gravatar pictures where it finds author email addresses. The picture on the right is an example of what it does to the main Catalyst CONTRIBUTORS section.
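For the curious, no API call is involved in fetching those pictures: a Gravatar URL is simply the MD5 hash of the trimmed, lowercased email address appended to the avatar base URL. A sketch of what the script computes for each address it spots (gravatar_url is a hypothetical helper name):

```perl
use Digest::MD5 qw(md5_hex);

# build the Gravatar image URL for a given email address:
# trim whitespace, lowercase, then MD5 the result
sub gravatar_url {
    my $email = shift;
    $email =~ s/^\s+|\s+$//g;
    return 'http://www.gravatar.com/avatar/' . md5_hex( lc $email );
}

print gravatar_url('larry@example.com'), "\n";
```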

I totally blame Tim Bunce for this one.

You see, I was happily minding my own business today until I caught sight of Tim's tweet bemoaning the fact that Test::Differences tests can't easily be used outside of a test harness. Darn him; that's exactly the kind of happy little puzzle I can't resist.

So I began to think about it. Of course, the Right Solution is probably to add alternative non-TAP-tied functions to the test modules themselves. But what if you just want to quickly leverage a module's functionality without having to re-arrange its innards? Well, most test modules use Test::Builder, so there are surely ways to twist that to our advantage. After an hour or two of hacking, I think I have one.

My little experiment is named Test::Wrapper, and as usual it can be found on GitHub. In a nutshell, Test::Wrapper takes the tests that you want to use and gleefully bastardizes the underlying Test::Builder mechanisms so that nothing is output. Instead, each test returns an object that contains it all.

For example, if we want to use eq_or_diff from Test::Differences, we can do

    use 5.10.0;

    use Test::Differences;

    use Test::Wrapper qw/ eq_or_diff /;

    my $test = eq_or_diff "foo" => "bar";

    say $test->is_success ? 'pass' : 'fail';

    # yes! prints the whole diff table
    say "output: ", $test->diag;

    # it also comes with overloading voodoo

    say $test ? 'pass' : 'fail';

    say "output: $test";

As mentioned above, this comes at the price of totally messing with Test::Builder, so using it intermixed with "real" tests is not an option. And it doubtless has a thousand gotchas I didn't think about. But still, I must say that I like it despite (or because of) its devious hijacking nature. I might even push it to CPAN if it proves to be useful to anyone (myself included).

XPathScript Reborn


A long, long time ago, Matt Sergeant (of SpamAssassin fame) came up with an XML application server for Apache called AxKit. It was quite nifty, and offered many ways to transform XML documents. One of them was a home-brewed stylesheet language called XPathScript, which very quickly caught my fancy. It had a very Perlish way of doing things and felt infinitely more ergonomic to me than, say, the visual tag-storm that is XSLT. So, quite naturally, it was not long before I found myself wanting to use it not only in the context of AxKit, but as a generic XML transformer. A little hacking happened to decouple the core engine from its Apache roots, and XML::XPathScript was born.

That module served me quite well throughout the years, but for some time now I had this plan of doing a clean rewrite patiently sitting on my back-burner. There are a few new features that I wanted to wedge in (an easier, cleaner way to create and extend stylesheets, a way for the transformation elements to pass information back and forth), and other infrastructure details (like the way the current XPathScript definition of 'template' and 'stylesheet' is the inverse of what one would expect). But, of course round tuits are rare, and that project lingered...

... but lingers no more. This week I had a smashing staycation, and thanks to a very understanding wife, I was able to indulge in the necessary hacking sessions to get the ground work done. The result is not on CPAN yet, but can be perused on GitHub.

As an example is worth a thousand pages of documentation, let's say that you want to turn this piece of DocBook-ish XML

<section title="Introduction">
<para>This is the first paragraph.</para>
<para>And here comes the second one.</para>
</section>

into the html

<h1>Introduction</h1>
<p class="first_para">This is the first paragraph.</p>
<p>And here comes the second one.</p>

Here's an XML::XSS script that will do the trick:

use XML::XSS;

my $xss = XML::XSS->new;

$xss->set(
    section => {
        showtag => 0,
        intro   => sub {
            my ( $self, $node ) = @_;
            $self->stash->{seen_para} = 0;    # reset flag
            return '<h1>' . $node->findvalue('@title') . '</h1>';
        },
    } );

$xss->set(
    para => {
        pre   => '<p>',
        post  => '</p>',
        process => sub {
            my ( $self, $node ) = @_;

            $self->set_pre('<p class="first_para">')
                unless $self->stash->{seen_para}++;

            return 1;
        },
    } );

print $xss->render( <<'END_XML' );
<doc>
    <section title="Introduction">
    <para>This is the first paragraph.</para>
    <para>And here comes the second one.</para>
    </section>
</doc>
END_XML

The code is still very young and has more bugs than I dare count, but it's getting to the point where it's usable. The next things on my plate are:

  • Make the documentation suck less.

  • Re-introduce the templates. So that

$xss->get('section')->set_intro( sub {
    my ( $self, $node ) = @_;
    $self->stash->{seen_para} = 0;    # reset flag
    return '<h1>' . $node->findvalue('@title') . '</h1>';
} );

can become

$xss->get('section')->set_intro( xsst q{
    <% $r->stash->{seen_para} = 0; %>
    <h1><%@ @title %></h1>
} );
  • Re-introduce the command-line transforming command.

  • Add the ability to use XPath expressions as rendering rules.

  • And much, much more...

Dist::Zilla autocomplete


Does anyone know of a Yak Shaving Anonymous association that hackers addicted to shearing Tibetan bovines could join?

Anyway, here are two little things I hacked on top of Dist::Zilla that peeps might find useful.

The first, as hinted by the blog entry's title, is a direct adaptation of Aristotle's perldoc-complete for dzil.

$ dzil <tab>
build     install   new       plugins   rjbsver   smoke     xtest     
clean     listdeps  nop       release   run       test

The second is actually the one that started that round of shaving for me. As there are about a gazillion Dist::Zilla plugins, I wanted to have a quick way to see all the plugins installed on a specific machine. Enter a new dzil sub-command: plugins.

$ dzil plugins
[ lotsa plugins ]
MatchManifest - Ensure that MANIFEST is correct
MetaConfig - summarize Dist::Zilla configuration into distmeta
MetaJSON - produce a META.json
* MetaNoIndex - Stop CPAN from indexing stuff
MetaProvides - Generating and Populating 'provides' in your META.yml
MetaProvides::Class - Scans Dist::Zilla's .pm files and tries to identify classes using Class::Discover.
MetaProvides::FromFile - In the event nothing else works, pull in hand-crafted metadata from a specified file.
MetaProvides::Package - Extract namespaces/version from traditional packages for provides
* MetaResources - provide arbitrary "resources" for distribution metadata
MetaTests - common extra tests for META.yml
MetaYAML - produce a META.yml
[ still lotsa plugins ]

The plugins marked with an asterisk are used by the current dist.ini. Also, give a plugin name to the sub-command, and it'll act as
perldoc Dist::Zilla::Plugin::<Plugin Name>. The sweet thing is, autocomplete also works there:

$ dzil plugins Pod<Tab>
PodCoverageTests  PodSyntaxTests    PodVersion        PodWeaver
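Under the hood, enumerating the installed plugins boils down to scanning the lib directories for Dist/Zilla/Plugin/*.pm files. A sketch of how such a listing could be produced (this is an illustration, not the actual dzil implementation; installed_plugins is a hypothetical helper):

```perl
use File::Spec;

# enumerate installed Dist::Zilla plugins by scanning the given lib dirs
# (typically @INC) for Dist/Zilla/Plugin/*.pm files. Top-level plugins
# only; nested ones like MetaProvides::Class would need a recursive walk.
sub installed_plugins {
    my @dirs = @_;
    my %seen;
    for my $dir (@dirs) {
        my $plugin_dir = File::Spec->catdir( $dir, qw(Dist Zilla Plugin) );
        opendir my $dh, $plugin_dir or next;
        for my $file ( readdir $dh ) {
            $seen{$1} = 1 if $file =~ /^(.+)\.pm$/;
        }
    }
    return sort keys %seen;
}

print "$_\n" for installed_plugins(@INC);
```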

Both patches are available on my Github fork of Dist::Zilla. Enjoy!

Wherever I May Roam

    Roamer, wanderer
    Nomad, vagabond
    Call me what you will

    $ENV{LC_ALL} = "anywhere";
    my $time = localtime;
    say {$anywhere} my $mind;
    local *anywhere = sub { ... };

    Anywhere I roam
    Where I 'git ghclone environment' is $HOME

        # 'grep may_roam($_) => @everywhere', 
        #                with apologies to Metallica

Laziness and a severe addiction to yak shaving conspire to constantly make me tweak configurations and hack scripts to make my everyday editing / shell / development experience as holistic as possible. Unfortunately the same laziness, combined with my constant hopping between home and $work computers, severely gets in the way of effectively using those optimizations. Indeed, although I have those nifty toys installed here and there, because they are not uniformly installed everywhere I constantly find myself using the machines' functional lowest common denominator.

To fix that, I've begun to dump all my environment's custom configurations, plugins, tweaks and hacks on GitHub. That way, I can import my whole baseline toolbox on any given box with a simple

git clone git://github.com/yanick/environment.git

As an added bonus, it also provides me with a public platform to show off all my little tricks to the world -- and a way to potentially let other peeps fork it and customize it to fit their own needs.

However, importing the environment is only half the battle; it also has to be properly installed. On one hand, the installation shouldn't be manual, as laziness would slip in again and ensure that it would never happen. On the other, I'm too wary of unintentional clobbering to leave everything to an installation script. So I decided to take the middle road and have a set of passive Perl tests verifying whether the various components are applied to the environment. For every tweak that I make, I also write a short test that checks that it is installed in the proper place. Thanks to the goodness of Perl's test harness, a quick 'prove t' is all that is needed to let me know if the current environment is in sync with the baseline:
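Each of those tests is essentially a file comparison between the repository baseline and the deployed copy. A minimal sketch of the underlying check (in_sync is a hypothetical helper; the real tests lean on Test::More and friends to get the nice diff tables below):

```perl
# true if the deployed copy matches the repository baseline byte-for-byte
sub in_sync {
    my ( $baseline, $deployed ) = @_;
    my $slurp = sub {
        open my $fh, '<', shift or return undef;
        local $/;
        scalar <$fh>;
    };
    my $want = $slurp->($baseline);
    my $got  = $slurp->($deployed);
    return defined $want && defined $got && $want eq $got;
}
```

A test file then boils down to something like ok in_sync( 'bash/mine.bash', "$ENV{HOME}/.bash/mine.bash" ), 'cp bash/mine.bash ~/.bash/mine.bash';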

[yanick@enkidu environment (master)]$ prove t
t/general.t ... 1/? 
#   Failed test 'cp bash/mine.bash ~/.bash/mine.bash'
#   at t/general.t line 15.
# +---+---------------------------------------------+---+-----------------------------------------+
# |   |Got                                          |   |Expected                                 |
# | Ln|                                             | Ln|                                         |
# +---+---------------------------------------------+---+-----------------------------------------+
# | 16|source ~/.bash/git-completion.bash           | 16|source ~/.bash/git-completion.bash       |
# | 17|PS1='[\u@\h \W$(__git_ps1 " (%s)")]\$ '      | 17|PS1='[\u@\h \W$(__git_ps1 " (%s)")]\$ '  |
# | 18|                                             | 18|                                         |
# * 19|export PATH="$PATH:~/work/git-achievements"  *   |                                         |
# * 20|alias git=git-achievements                   *   |                                         |
# | 21|                                             | 19|                                         |
# * 22|\n                                           *   |                                         |
# | 23|###########################                  | 20|###########################              |
# | 24|# Misc                                       | 21|# Misc                                   |
# | 25|###########################                  | 22|###########################              |
# +---+---------------------------------------------+---+-----------------------------------------+
# | 42|                                             | 39|                                         |
# | 43|complete -C perldoc_complete perldoc         | 40|complete -C perldoc_complete perldoc     |
# | 44|complete -C perldoc_complete pod             | 41|complete -C perldoc_complete pod         |
# |   |                                             * 42|\n                                       *
# |   |                                             * 43|\n                                       *
# |   |                                             * 44|# aliases                                *
# |   |                                             * 45|source ~/.bash/aliases                   *
# +---+---------------------------------------------+---+-----------------------------------------+
[ etc... ]

It's not a perfect system, and there's still a lot of polishing that can be done, but I've been using it for a few weeks and it has already been quite useful.

Perl Chowder for the Boys


At $work, I have a lot of people interested in Perl, but not interested enough to keep abreast of all the announcements and happenings in Perl-land. Which is a shame because, let's face it, the view is always at its most breathtaking from the apex of the unfurling wave.

So, like a good budgie papa, I've taken it upon myself to feed them with a bi-weekly review of all things new and Perlish. After all, we already have weekly (and very, very good) reviews of the database and sysadmin scenes, and I'd be darned if I was going to let them have all the fun.

The first and second editions are already out. Unlike most of my other blog entries, I'll not echo those here considering that most people catch Fearful Symmetry via the Perl news aggregators and, well, injecting a review of blog entries in blog aggregators would be a mite recursively pointless.

There were two things I had wanted to do for some time now. The first was to come up with a way to quickly and easily set up a DarkPAN mirror, so that we would have more control over our dependency chain at work. The second was to make a portable CPAN proxy service, so that I can always have access to my favorite modules, even if the machine I'm working on has no Internet access. Last week, I finally had a few round tuits to spend on that type of background itch, and the result is dpanneur (for dépanneur, French Canadian for convenience store).

Installation

As it stands, dpanneur is a very thin Catalyst application gluing together the goodness of CPAN::Cache and MyCPAN::App::DPAN, and throwing in Git as the archive manager.

To get it running, first fetch it from Github

$ git clone git://github.com/yanick/dpanneur.git

then check that you have all the dependencies

$ perl Makefile.PL

and run the script that will create the module repository

$ ./script/create_repo

For now, the module repository is hard-coded to be in the subdirectory cpan of dpanneur. A branch called proxy is created and checked out. Eventually, I'll use GitStore to push newly fetched modules to the repository, but for the time being if dpanneur is to be used as a proxy, that branch must remain the one being checked out.

All that is left is to fire up the server in whichever mode you prefer (single-thread test server would do nicely for now)

$ ./script/dpanneur_server.pl

and there you are, running your first dpanneur. Congrats! :-)

Using it as a caching proxy

You can use the server as a caching proxy, either for its own sake, or to seed the DarkPAN branches. To do that, you just have to configure your cpan client to use http://yourmachine:3000/proxy:

$ cpan
cpan[1]> o conf urllist = http://localhost:3000/proxy
cpan[2]> reload index
cpan[3]> install Acme::EyeDrops
Running install for module 'Acme::EyeDrops'
Running make for A/AS/ASAVIGE/Acme-EyeDrops-1.55.tar.gz
Fetching with LWP: 
    http://localhost:3000/proxy/authors/id/A/AS/ASAVIGE/Acme-EyeDrops-1.55.tar.gz
etc..

As the modules are downloaded, they are also saved and committed within the repo

[dpanneur]$ cd cpan

[cpan (proxy)]$ git log -n 3
commit d065ad152f2204295334c5475104a3da517b6ae1
Author: Yanick Champoux <yanick@babyl.dyndns.org>
Date:   Wed Mar 10 20:32:52 2010 -0500

    authors/id/A/AS/ASAVIGE/Acme-EyeDrops-1.55.tar.gz

commit e8d2e83d1b16e2e0713d125f9a4bd2742681f859
Author: Yanick Champoux <yanick@babyl.dyndns.org>
Date:   Wed Mar 10 20:31:42 2010 -0500

    authors/id/D/DC/DCONWAY/Acme-Bleach-1.12.tar.gz

commit 7e0b4b600bac8424c519199ee96dc56ffbb177eb
Author: Yanick Champoux <yanick@babyl.dyndns.org>
Date:   Wed Mar 10 20:30:47 2010 -0500

    modules/03modlist.data.gz

Using it as a DarkPAN server

Enabling DarkPAN repos is not much more involved. All we have to do is create a branch containing the modules we want and have the 'dpan' utility bundled with MyCPAN::App::DPAN generate the right files for us.

To continue with the example of the previous section, let's say that we want a DarkPAN branch containing Acme::EyeDrops, but not Acme::Bleach. Then we'd do

                        # only necessary if you are running 
                        # the server while you work on the branch
[dpanneur]$ git clone cpan cpan-work   

[dpanneur]$ cd cpan-work

                        # branch just before we imported Acme::Bleach
[cpan-work (proxy)]$ git branch pictoral 7e0b4b600bac8424c519199ee96dc56ffbb177eb

[cpan-work (proxy)]$ git checkout pictoral
Switched to branch 'pictoral'

                        # cherry-pick the Acme::EyeDrops commit
[cpan-work (pictoral)]$ git cherry-pick d065ad152f2204295334c5475104a3da517b6ae1

                        # rebuild the module list
[cpan-work (pictoral)]$ dpan

                        # commit the new 02packages.details.txt.gz
[cpan-work (pictoral)]$ git add .
[cpan-work (pictoral)]$ git commit -m "dpan processing"

                        # push back to the mothership
[cpan-work (pictoral)]$ git push origin pictoral

And that's it. Now point the cpan client to http://yourmachine:3000/pictoral, and you'll get the limited mirror.

cpan[1]> o conf urllist http://localhost:3000/pictoral                                               
cpan[2]> reload index

cpan[3]> i Acme::EyeDrops
Strange distribution name [Acme::EyeDrops]
Module id = Acme::EyeDrops
    CPAN_USERID  ASAVIGE (Andrew J. Savige <asavige@cpan.org>)
    CPAN_VERSION 1.55
    CPAN_FILE    A/AS/ASAVIGE/Acme-EyeDrops-1.55.tar.gz
    UPLOAD_DATE  2008-12-02
    MANPAGE      Acme::EyeDrops - Visual Programming in Perl
    INST_FILE    /usr/local/share/perl/5.10.0/Acme/EyeDrops.pm
    INST_VERSION 1.55


cpan[4]> i Acme::Bleach
Strange distribution name [Acme::Bleach]
No objects found of any type for argument Acme::Bleach
So mothers keep your hackers at home
Don't let them journey all alone
Tell them this world is full of danger
And to shun the repositories of strangers
        - The Tag Set of Strangers, 
                (with apologies to) Nick Cave and the Bad Seeds

One of the things I love about Git is how I can add branches from remote repositories to mine at will without fearing I'll mess up anything. The remote branches will not clash with my own, even if they share the same names, because they are referenced as repository/branch. However, as with anything else, you can still poke yourself in the eye if you try hard enough:

$ git remote
bob

$ git checkout -b bob/baroque
Switched to a new branch 'bob/baroque'

$ git fetch bob
From ../bob
* [new branch]      baroque    -> bob/baroque

$ git checkout bob/baroque
warning: refname 'bob/baroque' is ambiguous.
Already on 'bob/baroque'

Here, I created a local branch called bob/baroque, which ends up having the same name as the branch baroque imported from Bob's repository. Confusing, but not the end of the world. I can still see the different branches with gitk and access the local and remote branches via git checkout heads/bob/baroque and git checkout remotes/bob/baroque.[1] The lesson to take out from this, of course, is simply not to use slashes in branch names and sidestep the whole issue.

[1] The fiendish-minded reader probably wonders at this point what would happen if I were to create another local branch called remotes/bob/baroque. I would deserve to be shot, that's what would happen.

With tags, surprisingly, the matter is much more prickly. Not only are tags not kept to the "namespace" of their repository of origin, but git fetch has a very dangerous default behavior:

$ git log -n 1 somework
commit 483d008c6207554236232fef4e8cd22cfb4b9bb8
Author: Yanick Champoux <yanick@babyl.dyndns.org>
Date:   Wed Mar 3 21:14:43 2010 -0500

    some work on my repo

$ git fetch --tags bob
From ../b
- [tag update]      somework   -> somework

$ git log -n 1 somework
commit 5f7f8eddd2d44e359fe8bc0d1a2f1642d073cad9
Author: Yanick Champoux <yanick@babyl.dyndns.org>
Date:   Wed Mar 3 21:15:25 2010 -0500

    some work from Bob

Yes, if there is a conflict, fetch --tags will silently clobber the local tags with their remote counterparts. Hope you remember all the commits you painstakingly tagged in the last six months... This behavior is so mind-bogglingly dangerous that, to this day, I wonder if I'm not missing something obvious.

Now, granted, it's fairly rare to import tags from remote repositories. But there are instances, like when adding the gitpan history of your module to your repository, where it's relevant. In those cases, for your own sake, make sure that the remote tags won't clash with yours. Even better, don't use fetch --tags at all. Instead, do something akin to:

$ git ls-remote --tags bob | perl -nae'$F[1]=~s#refs/tags#bob#; `git tag $F[1] $F[0]`'

This will name the remote tags using the repository/tag convention. Even better, if a tag already exists, Git will complain and it won't be clobbered by the new version.
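The one-liner above is little more than text munging on ls-remote's two-column output. Spelled out as a reusable function (namespaced_tags is a hypothetical helper; as a bonus over the one-liner, it skips the peeled ^{} entries that ls-remote also emits for annotated tags):

```perl
# turn `git ls-remote --tags <remote>` output into repo-namespaced
# tag names mapped to their commit ids
sub namespaced_tags {
    my ( $remote, $output ) = @_;
    my %tags;
    for my $line ( split /\n/, $output ) {
        my ( $sha, $ref ) = split ' ', $line;
        next if $ref =~ /\^\{\}$/;               # skip peeled tag entries
        ( my $name = $ref ) =~ s{^refs/tags}{$remote};
        $tags{$name} = $sha;
    }
    return %tags;
}

# each entry can then be created locally with `git tag $name $sha`
```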

The Itch

For many, CPAN is a Canadian Prairies-sized field of modules where it's darn hard to separate the wheat from the chaff.

While the CPAN Ratings service is the principal and official way CPAN is trying to rank its distributions, it (for me) doesn't quite scratch the itch because

  1. not all distributions have reviews.

  2. even when there are reviews, they generally don't answer the next question: what should I use instead?

The Dream

Consequently, for a while now I've been playing with ideas on how the rating could be improved. What I came up with so far is a very minimal system going straight for the goods, where a rating would consist of

  1. The rating proper, which can be one of three values: 'thumb up', 'thumb down', or 'neutral'.

  2. If you give the distribution a thumb down (or even if you give it a thumb up, for that matter), you can recommend another distribution to be used instead.

  3. An accompanying short comment (140 characters or less, so that it's Twitter-ready; lengthier, proper reviews can still be done via CPAN Ratings).

Aaaand... that's it. Not exactly mind-blowing, but it's so simple it could actually work.

JFDI, you say?

And now, since I had a three-day weekend, I decided to give the idea a try and implement a prototype. Because I had only so many hours to devote to the project (hey, it was Valentine's Day, after all), I've built it as a REST service. That way I didn't have to spend any time on prettiness and, if the idea does catch on, it can easily be grafted onto a web site, IRC/IM bot, phone service, search.cpan.org (well, I can dream big, can't I?), etc.

The code is on Github. It's all rather untidy, but it's (roughly) functional. Let's have a little tour of the application via the example REST client 'cpanvote.pl' included in the repo, shall we?

First, we need an account, which can be created via the client:

$ cpanvote.pl --register --user max --password min

(and yes, this way of creating users is rather brain-dead, but this is only a rough prototype, so it'll do for now)

Once an account is created, reviews are as simple as

$ cpanvote.pl --user max --password min XML-XPathScript --yeah

or

$ cpanvote.pl --user yanick --password foo Games::Perlwar --meh \
      --comment "could use a little Catalyst love"

or

$ cpanvote.pl --user yanick --password foo Dist-Release --neah \
      --instead Dist-Zilla \
      --comment "nice try, but RJS is just better at it"

For the time being, I've only implemented a very simple per-distribution results view, which can be queried via any browser:

$ lynx -dump http://localhost:3000/dist/Dist-Release/summary
---
comments:
    - nice try, but RJS is just better at it
    - cute
instead:
    - Dist-Zilla
vote:
    meh: ~
    neah: 1
    yeah: 1


$ lynx -dump http://localhost:3000/dist/Dist-Release/detailed
---
-
    comment: nice try, but RJS is just better at it
    instead: Dist-Zilla
    vote: -1
    who: yanick
-
    comment: cute
    vote: +1
    who: max
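Both views are just different rollups of the same review records. A sketch of the aggregation behind the summary view, assuming reviews stored as plain hashes (the actual storage layer is beside the point; summarize is a hypothetical helper, and zero counts here come out as 0 rather than the ~ shown above):

```perl
# aggregate individual reviews into a per-distribution summary
sub summarize {
    my @reviews = @_;
    my %summary = ( vote => { yeah => 0, meh => 0, neah => 0 } );
    for my $r (@reviews) {
        $summary{vote}{ $r->{vote} }++;
        push @{ $summary{comments} }, $r->{comment} if $r->{comment};
        push @{ $summary{instead} },  $r->{instead} if $r->{instead};
    }
    return \%summary;
}

my $s = summarize(
    { who => 'yanick', vote => 'neah', instead => 'Dist-Zilla',
      comment => 'nice try, but RJS is just better at it' },
    { who => 'max', vote => 'yeah', comment => 'cute' },
);
```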

Test Server

For the curious, I have an instance of the application running at http://babyl.dyndns.org:3000 (cpanvote.pl --host babyl.dyndns.org:3000 ...). It's running a single-thread Catalyst with debug information and a SQLite backend on my home machine, which has rather pathetic bandwidth, so please be gentle and don't be overly surprised if it goes down.

Whaddya think?

This is the moment where I turn to the audience and prod to see if I might be on to something, or if I'd better stop talking now. In other words, what are your thoughts on this: --yeah, --neah or --meh? :-)

Further considerations

Random thoughts for the next steps (assuming that there will be a next step).

  • review accounts could potentially be PAUSE-based.

  • Give peeps the opportunity to submit tags for the module alongside their review, and let the taxonomy sort itself out à la del.icio.us.

  • We could go meta all the way and vote on reviewers as well, which could give their opinion more weight.
