There have been new releases of the software on Sorcerer's Isle, mostly to
update URLs and to add the previously missing documentation to the download packages.
The reason for the URL changes is the migration of repositories from GitHub
onto my own server, details of which will follow this quick summary of the releases.
If you want more details on the releases, read the full version of this post, but in summary...
- Lucee on Jetty is now at v0.7, bundling both Jetty 9.4.44 and Jetty 10.0.7 with Lucee 5.3.8.
- cfPassphrase had been sitting on rc0.2 for longer than necessary and was promoted to v0.2 -
  there are no code changes, so there is no need to update current projects.
- QueryParam Scanner v0.8 has been formally merged and released - again, no code changes from rc0.8,
  just another case of life getting in the way when this should otherwise have happened.
- cfRegex v0.1.003 has been re-released as v0.3 to keep versioning consistent amongst projects,
  and now has its own repository, along with simplified packages and documentation.
- Scatter v0.1.1 does nothing more than add documentation and update the repository URL,
  so again there is nothing to update for existing projects.
Why
So that's the quick overview, but why?
The primary reason for all of these releases was to change their repository URLs from
https://github.com/boughtonp/[reponame] to https://code.sorcerersisle.com/[reponame].
This is not a total move away from GitHub - it is still used for issue tracking
(for the time being), and I'll most likely still push code there when full
releases are made - but it will only be a secondary source/mirror.
UPDATE: I have since fully moved off GitHub; use https://codeberg.org/boughtonp/[reponame]
for backup repos or to report issues.
The motivation for doing this is to reduce dependency on centralised proprietary services,
and to remove the unwanted requirement to have JavaScript enabled.
This move would have happened a great deal sooner, but when I looked into the various Git
repository browsers available, I found a lot of bloated software with features
I neither needed nor wanted, hundreds of megabytes of code and dependencies,
no ability to meaningfully change how it looks, and so on.
Long story short: irritated by how everything sucked, whilst also looking for
a decent project to extend my Python skills, I created a
lightweight and themeable Git repository browser.
GitFrit
GitFrit is capable of running on CentOS 7, needing only Python 3.6 (or newer) and Git 2.24 (or newer).
The source code is currently ~0.5MB (half of that is the included templating library, which I'd like to streamline).
GitFrit is tiny in comparison to almost everything else available - even git-web with ~0.3MB of source is only slightly smaller,
and that has its markup intertwined with Perl, preventing it from being themeable.
GitFrit is not quite ready for release yet - I took shortcuts to get it up onto
Sorcerer's Isle sooner, and those now need to be cleaned up into configuration
options, all of which need to be documented, plus there are a couple more features
I'd like it to have first.
When those changes (and thus a release) will happen is uncertain - I need to shift focus
back onto other priorities, and unless there's significant interest in GitFrit, it may take me a
while to get back to it and spend the time to make it publicly available.
If you are interested, do send me an email so I can let you know when it's ready.
Scatter is a JavaScript library for randomly arranging HTML elements within a containing element.
It is deliberately lightweight, easy to integrate, and without dependencies.
The initial script was written to provide a scattered polaroid effect for an in-page gallery,
as a reaction to the complexity found in a couple of existing libraries - both of those
libraries expected a JSON file containing the image URLs, which they parsed and
iterated through to generate specific markup, and neither could be easily
modified to take the simpler and more flexible approach of being pointed at existing markup.
Thus, the script that evolved into Scatter was created, with a focus on providing an
easy-to-integrate and configurable scattering effect with a clean core script - i.e.
following the philosophy of doing one thing well, and also making it easy for others
to understand (and extend if needed).
Scatter does not convert JSON to HTML for you - that's a distinct task from randomly
arranging HTML elements - but it will work whether your HTML is static or dynamic,
and it does not limit you to images styled as polaroids.
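To make that concrete, here is a rough sketch in TypeScript of the underlying idea - this is
not Scatter's actual code or API (the function name and parameters are hypothetical), just the
general technique of applying random offsets and rotations to the children of a container:

// Illustrative only - Scatter's real API and options differ; see its documentation.
function scatterChildren(container: HTMLElement, maxOffsetPx = 40, maxRotateDeg = 15): void {
  for (const child of Array.from(container.children) as HTMLElement[]) {
    const x = (Math.random() * 2 - 1) * maxOffsetPx;  // random horizontal offset
    const y = (Math.random() * 2 - 1) * maxOffsetPx;  // random vertical offset
    const r = (Math.random() * 2 - 1) * maxRotateDeg; // random rotation
    child.style.transform = `translate(${x}px, ${y}px) rotate(${r}deg)`;
  }
}

// Point it at existing markup - no JSON manifest required.
scatterChildren(document.querySelector<HTMLElement>("#gallery")!);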
The versatility is demonstrated within the Scatter documentation, where a handful of
examples show how it can be used to achieve vastly different effects.
Scatter does not require any external libraries; it's a single ~12KB file (~3KB compressed)
and will run in any browser released in the past five years (earlier browsers will work with
appropriate polyfills, available either from MDN or backwards-compatibility libraries).
If you find any issues, or you have a need that Scatter almost-but-not-quite meets, feel free
to either raise an issue or get in touch directly to discuss further.
The Jetty documentation for Configuring SSL/TLS is long and
daunting, but makes no mention of how to work with the Let's Encrypt
certificate authority, which provides free automated certificates with the aim
of having the entire web available over HTTPS.
This article provides the steps for obtaining a Let's Encrypt certificate,
importing it into Jetty, enabling HTTPS using the certificate, and handling
renewals.
It assumes you have Jetty set up in a home/base configuration, serving over HTTP
for one or more Internet-facing domain names.
As with all such guides, it is recommended to read all steps before making any changes,
and ensure you have backups for any existing files you may modify.
There are various situations where one might want to know the full URL sent
over HTTP by the user agent, before any rewriting has occurred.
Depending on the situation and setup, it can be as simple as using CGI variables
such as path_info, redirect_url, or request_uri, and within a JVM servlet
getRequestURL() may prove useful - but none of those are guaranteed to be
the URL which Apache received, nor are any of Apache's other documented variables.
Fortunately there is a workaround, because one of the variables provided - THE_REQUEST -
is the first line of the HTTP request, which contains the desired request URL nestled
between the method and protocol, i.e. "GET /url HTTP/1.1" - meaning all that
needs doing is to chop the ends off.
It is relatively simple to extract the URL, and at the same time provide it to
later scripts, by using the RequestHeader directive from mod_headers to
set and modify a header, like so:
RequestHeader set X-Original-URL "expr=%{THE_REQUEST}"
RequestHeader edit* X-Original-URL ^[A-Z]+\s|\sHTTP/1\.\d$ ""
The first line creates a header named X-Original-URL with the full value of that variable.
The second line performs a regex replace on the specified header, matching
both the request method and its trailing space (^[A-Z]+\s) and the protocol
plus its preceding space (\sHTTP/1\.\d$), replacing each with an empty string
to leave just the URL.
The * after edit is what makes the replace occur multiple times - without it
only the first match would be replaced (i.e. the * is equivalent to a /g global flag).
The name X-Original-URL is used for compatibility with the equivalent
header set by the IIS URL Rewrite Module - both that module and the above
solution provide the full request URL, including query string, and encoded in
whatever manner the user agent sent, but one difference is that the above
config always sets the header, whilst the IIS version only sets it when the URL
has been rewritten.
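Downstream code can then read the header like any other request header. As a minimal sketch -
assuming, purely for illustration, a Node.js/TypeScript application proxied behind Apache
(a JVM servlet would use request.getHeader("X-Original-URL") in the same way):

import { createServer } from "node:http";

// Minimal illustrative backend; Node lowercases incoming header names.
createServer((req, res) => {
  const originalUrl = req.headers["x-original-url"]; // set by the RequestHeader directives above
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end(`URL as received by Apache: ${originalUrl ?? "(header not set)"}`);
}).listen(8080);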
Test-Driven Development - TDD - is a great way to catch bugs before they go live,
to ensure fixes stay fixed, and to prove the functionality of software. This
stability alone is reason enough to ensure tests exist for as much of every
application as is feasible, but is not the only benefit.
When you write all the key tests for a piece of functionality in advance
of writing the rest of the code, you are defining when that functionality
will be complete, how much progress has been made, and specifically what tasks
are next to be done.
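As a small, hedged illustration (the function and its expected behaviour are entirely
hypothetical, not taken from any project mentioned above), a test written before the code
exists might look like this in TypeScript:

import { strict as assert } from "node:assert";

// Hypothetical feature: slugify() does not exist yet, so the stub deliberately fails.
// Writing the assertions first defines exactly when the feature is complete.
function slugify(input: string): string {
  throw new Error("not implemented yet");
}

// Red phase: these fail now; implementing slugify() until they all pass marks the task done.
assert.equal(slugify("Hello World"), "hello-world");
assert.equal(slugify("  Multiple   spaces  "), "multiple-spaces");
console.log("all slugify tests pass");

Each assertion that goes from failing to passing is a visible unit of progress.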
For anyone practising Agile development, the parallels should be blatant. There
is similarity in the benefits too, particularly with respect to keeping focused
on the task being worked on and knowing when you're done and ready to move on.
With Agile development, each piece of functionality is broken down in a couple
of ways - as business-focused acceptance criteria, and development-focused tasks,
which is (in part) about making large pieces of work less daunting. Likewise,
TDD provides a way for an otherwise overwhelming piece of work to be described
step-by-step by someone who understands the matter at hand, and (with appropriate
software structure and functionality) can then be divided up amongst multiple
developers to increase overall velocity.
Incomplete functionality can be more easily passed from one developer to another,
simplifying the explanations of what has been done and still needs to be done;
the tests do the talking.
TDD means all newly written code has tests to describe what it should do, which
helps to build trust in the application - a full set of passing tests proves a
particular change has not altered the tested functionality, and allows you to
deploy it with far more confidence than otherwise.
TDD means writing the tests happens first - before making the changes that the
tests apply to, not afterwards. Writing tests afterwards still has advantages
compared to no tests at all, but it does not guarantee that all code has tests
- indeed, it makes it more likely that tests will be rushed or dropped if time
is limited; precisely the situation when bugs are more likely to be introduced
- and thus can give false confidence in what the result of a set of changes may
be.
Test-Driven Development increases productivity. As with most things, TDD has a
learning curve - it's not something a programmer will instantly be completely
efficient with - but some programmers will still complain (erroneously) that
writing automated tests slows them down.
It's obvious that writing tests is additional work compared to not doing so, but
this additional upfront cost comes with the saving of not having to manually
step through performing repetitive tasks with minor variations each time - each
test is written once, and is executed easily each time the code progresses. When
automated, tests are not forgotten - the full test suite can be run at any time
to check for regressions or unexpected side-effects. Automated testing doesn't
remove the need to use an application as a user, but it greatly reduces the time
it takes to check that changes are working as intended, and helps reduce the
space for human error.
Of course, practising TDD does mean that pesky bugs are identified at the time
of development - not days, weeks, or months down the line - and this can also
appear to make things slower; in reality identifying bugs sooner generally makes
the issue easier to fix.
Finally, with TDD you know when you're done - when all the tests pass. Without
tests written up-front, there may not be a clear finishing point. It's easy for
developers to become side-tracked, or to unconsciously introduce scope creep, and
thus slow down the completion of a task which an appropriate set of test cases
would have identified as already implemented.
This article hopefully explains how Test-Driven Development is not just about
code quality, reducing application errors and increasing stability. It enables
developers to work smarter and faster, give clearer progress updates, know what
they're doing next, and not become disheartened by bigger chunks of work.
TDD is a critical methodology which any competent professional developer should
want to use.