Stand With Ukraine. Stop Putin. Stop War.

Hello! Welcome to my humble web presence. I'm Mark Hamstra, 32 years young, wearer of many hats at modmore, resident of Leeuwarden, the Netherlands. Most of my time is spent building and maintaining awesome extras and tools at modmore, but I also love gaming, cooking, and my dogs. More about me.

This site is where I share thoughts, cool projects and other oddities related to MODX, xPDO and ExtJS. I also write about MODX regularly over at MODX.today. Sometimes I post three blogs in a week, sometimes there's nothing new here in a year. Read a random article.


After countless hours of copying and updating old content, I switched the modmore support site over to a new system last weekend. While the old site ran on MODX (FaqMan plus some custom routing), the new support site is powered by HelpScout Docs.

Docs is integrated with the HelpScout helpdesk we've provided support through for many years, so using it to host the FAQs makes sense. It allows the help widget ("Beacon") on the site to search the FAQs, and it gives us quick access to all FAQs when responding to customer emails, so we can point people in the right direction.

Unfortunately, shortly after making the switch, I ran into two compounding issues.

Problem 1: the built-in 404 page just plain sucks. On top of lacking branding, it offers the user no way to continue looking for what they came for: no search, no navigation, not even a link to the homepage.

HelpScout Docs' very plain page-not-found error

Problem 2: it would not accept a bunch of redirects. While Docs supports adding redirects through the back-end interface, it would not accept original URLs with a "+" or "(" or ")" in the path. While these are valid characters in URLs (whether plainly or percent encoded), Docs did not agree.

Unfortunately, about half of the URLs on the original support site used one of those special characters.

Which means problem 1 + problem 2 = half of the users following existing links to the FAQs end up on a 404 page with no clue where to look next. Ouch.

While the HelpScout team confirmed the problems, they also indicated they would not be fixed until a future big revamp, so I've had to figure out a way to deal with this myself. I considered a couple of alternatives:

  • There's a good API for Docs, which I could use to build a custom support site again while feeding off the data stored with HelpScout, but the point of moving to Docs was that I wouldn't have to maintain or host the support site myself.
  • I could flip the DNS back to the old support site, but that would actually make the situation worse: I'd just gone through all the content and rewritten parts of it, and those changes only existed in Docs. I'd need to spend even more time copying them back into MODX if I were to go this route.

Finally I had a random thought while walking the dog: the entire modmore site is behind CloudFlare. CloudFlare has this thing called CloudFlare Workers, which interacts with a request "on the edge" and can be used to rewrite responses on the fly with JavaScript.

With CloudFlare Workers, I could fix both problems: apply the missing redirects, and present a different 404 page.

I had never used Workers before, but the gist is pretty easy: register a JavaScript function that takes in the fetch request, do stuff with it before and/or after it hits the origin server, and return a response. The function then gets applied to a route (in this case, the entire support subdomain) to make the magic happen.

In less than an hour I adapted the official example for applying bulk redirects, and this tutorial by Mickael Vieira for custom 404 pages with Workers, to fix both problems. Plus the free Workers tier is more than enough for our traffic, so I'm more than pleased with this quick solution.

The final worker code (with some minor tweaks) is included below if anyone has similar needs and is looking for inspiration:

// Redirects we can't configure in HelpScout Docs, mapping old path > new URL
const redirectMap = new Map([
  ["/faq/14-development+licenses", "https://support.modmore.com/category/119-free-development-licenses"],
  // ... add more redirects here ...
]);

// Register the handler that intercepts incoming requests
addEventListener("fetch", event => {
  event.respondWith(handleRequest(event.request));
});

// The magic function that handles the request, applying our redirects and custom 404 handling.
async function handleRequest(request) {
  const url = new URL(request.url);
  const path = url.pathname;

  // Check if we have a redirect on-file we need to process before handing off the request to the origin
  const redirect = redirectMap.get(path);
  if (redirect) {
    return Response.redirect(redirect, 301);
  }

  // Send the request to the origin (helpscout) server
  const response = await fetch(url.toString(), request);

  // If we reach an HTML 404 page, replace it with our own
  if (response.status === 404 && isHTMLContentTypeAccepted(request)) {
    return fetchAndStreamNotFoundPage(response);
  }

  // Return the origin response
  return response;
}

// Function that streams a 404 page 
// See https://www.mickaelvieira.com/blog/2020/01/27/custom-404-page-with-cloudflare-workers.html
async function fetchAndStreamNotFoundPage(resp) {
  const { status, statusText } = resp;
  const { readable, writable } = new TransformStream();

  // This could in the future point to a dedicated 404 page, but for now I'm okay with showing the homepage
  const response = await fetch("https://support.modmore.com/?err=404");
  const { headers } = response;

  response.body.pipeTo(writable);

  return new Response(readable, {
    status,
    statusText,
    headers
  });
}

// Makes sure only text/html requests are handled. 
function isHTMLContentTypeAccepted(request) {
  const acceptHeader = request.headers.get("Accept");
  return (
    typeof acceptHeader === "string" && acceptHeader.indexOf("text/html") >= 0
  );
}

I'm still a big fan of HelpScout's helpdesk solution. Over the years I've gotten to know them as very detail- and customer-oriented in their services, so hopefully the big Docs revamp they're planning brings that attitude to the hosted knowledge base as well.

Until that happens, I'm very happy that CloudFlare Workers saved the day and helped me avoid having to do a lot of extra work!

You've probably heard of UUIDs as a way to have unique identifiers for objects. But have you heard of ULIDs?

I sure hadn't until @PhilSturgeon tweeted about them, but since then I've taken a closer look and just finished implementing them in an xPDO project that was using UUIDs until now.

The basic premise is that, just like UUIDs, you can create a ULID without needing to know what the last one was (which is different from the standard auto-increment primary ID mostly used in xPDO projects). They're guaranteed to be unique, at least up to a massive number of generations per millisecond. If the project needs to scale across multiple servers, or if you use it for request logging, that's a big plus.

Comparing ULID vs UUID is more subjective, but I like that you get simpler and shorter identifiers. While a UUID would look something like 05c337c3-d2b3-4a50-a8b1-90e4fae23cfc, a ULID looks like 01edksqtx9cfzzt1y9sm57h3yq. It's still random gibberish, but it's cleaner and ready to be used in URLs.

If I understand it correctly, the first part of the ULID is based on the timestamp, so ULIDs will actually sort (roughly) by the time they were generated. That's a useful feature which may also help with insert performance in databases.

To incorporate this into an xPDO project, you'll need to replace your xPDOSimpleObject usage with a custom base object and implement some logic related to the primary key.

For this example, I'm using robinvdvleuten/php-ulid. If you're following along, install that into your project with composer and make sure your autoloader has been loaded.
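
Here's a minimal sketch of the library on its own, outside of xPDO (assuming the Packagist package name is robinvdvleuten/ulid; double-check the library's README if in doubt):

<?php
// composer require robinvdvleuten/ulid
require 'vendor/autoload.php';

use Ulid\Ulid;

// Passing true generates a lowercase ULID, same as in the save() override later in this article
$first = Ulid::generate(true);
usleep(2000); // wait a couple of milliseconds so the second ULID gets a later timestamp
$second = Ulid::generate(true);

echo $first, PHP_EOL;  // e.g. 01edksqtx9cfzzt1y9sm57h3yq
echo $second, PHP_EOL;

// The first part encodes the timestamp, so a later ULID sorts lexicographically after an earlier one
var_dump(strcmp((string)$first, (string)$second) < 0); // bool(true)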

In your xPDO XML schema, define a base object that all your other objects will extend from. Note that this should extend xPDOObject (and not xPDOSimpleObject), and define the field to hold the primary key.

The examples in this article are based on xPDO 3. For use with xPDO 2, remove the namespaces (and perhaps add a custom prefix for your project to avoid conflicts) and it ought to work just the same.

<?xml version="1.0" encoding="UTF-8"?>
<model package="YourNamespace\Model\" baseClass="YourNamespace\Model\BaseObject" platform="mysql" defaultEngine="InnoDB" version="1.1">
    <object class="YourNamespace\Model\BaseObject" extends="xPDO\Om\xPDOObject" inherit="single">
        <field key="ulid" dbtype="varchar" precision="52" phptype="string" null="false" />

        <index alias="ulid" name="ulid" primary="true" unique="true" type="BTREE">
            <column key="ulid" length="" collation="A" null="false" />
        </index>
    </object>
    
    
    <object class="SomeObject" table="some_object">
        ...
    </object>
</model>

As a ULID is encoded as a 26-character string, you can get away with lowering the precision to 26, but I like to have a little bit of padding in case things change down the road.

Because I'm defining the baseClass in the model, I've skipped providing the extends attribute on the object. 

Also note that we're defining the index to be primary and unique.

Now build the model classes using a build script or, for xPDO 3, the parse-schema command:

vendor/bin/xpdo parse-schema mysql path/to/your/project.mysql.schema.xml src/ -v --update=1 --psr4=YourNamespace

With the model files generated, edit your BaseObject class which should be in src/Model/BaseObject.php or model/yourproject/baseobject.class.php for xPDO 2.

What we're going to do is override the save() method to set the primary key for new objects to a freshly generated ULID.

(Again, note this is an xPDO 3 example. If using xPDO 2, remove namespaces and it should work the same.)

<?php
namespace YourNamespace\Model;

use Ulid\Ulid;
use xPDO\Om\xPDOObject;

class BaseObject extends xPDOObject
{
    public function save($cacheFlag = null)
    {
        if ($this->isNew()) {
            $this->set($this->getPK(), Ulid::generate(true));
        }
        return parent::save($cacheFlag);
    }
}

Pretty simple, huh? In this example I'm passing true to the generate method because I prefer the key to be lowercase. 

Now when you use the xPDO APIs to create new objects, the ULID is added automatically. This also works with your foreign keys and relations - just make sure to use a varchar field that can hold the ULID, rather than the int you may be used to with auto-incrementing keys.
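
For example, a related object in the schema could look something like this (a sketch: SomeChild, the parent field, and the Parent alias are made up for illustration):

<object class="YourNamespace\Model\SomeChild" table="some_child">
    <!-- The foreign key is a varchar sized to match the ulid field, not an int -->
    <field key="parent" dbtype="varchar" precision="52" phptype="string" null="false" />

    <index alias="parent" name="parent" primary="false" unique="false" type="BTREE">
        <column key="parent" length="" collation="A" null="false" />
    </index>

    <aggregate alias="Parent" class="YourNamespace\Model\SomeObject" local="parent" foreign="ulid" cardinality="one" owner="foreign" />
</object>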

When retrieving objects, note that you should use the array syntax on $xpdo->getObject to specify the key. xPDO might just be smart enough to handle it as we've defined the primary key, but for security reasons you should never pass arbitrary user data as a string into the second parameter of getObject.

<?php

$ulid = (string)$_GET['ulid']; // or wherever you're getting this from, like a routing component
$object = $xpdo->getObject(\YourNamespace\Model\SomeObject::class, ['ulid' => $ulid]);
if ($object) {
    var_dump($object->toArray());
}
else {
    echo 'Doesn\'t exist';
}

Enjoy.

The Imagine PHP Library is a very useful tool for image manipulations. I'm using it on a variety of projects, including projects where users upload images directly from mobile devices, which are often rotated.

The actual rotation is stored in the EXIF data, but that's not automatically handled.

It turns out that implementing that with Imagine is really easy though, thanks to the Autorotate filter. 

Here's a functional example of taking an image, rotating it if needed, making sure it's at most 1920x1080px in size while keeping the aspect ratio, and finally stripping out the metadata.

<?php

// Create an Imagine instance - using Imagick if available, otherwise falling back to GD
try {
    $imagine = new \Imagine\Imagick\Imagine();
} catch (\Imagine\Exception\RuntimeException $e) {
    $imagine = new \Imagine\Gd\Imagine();
}

// Use the EXIF metadata reader to be able to access the orientation
$imagine->setMetadataReader(new \Imagine\Image\Metadata\ExifMetadataReader());

// Load the image into memory - this could also use $imagine->read or $imagine->open 
$img = $imagine->load('.. raw image data ..');

// Use the autorotate filter to rotate the image if needed
$filter = new \Imagine\Filter\Basic\Autorotate();
$filter->apply($img);

// Resize down to max 1920x1080px while keeping aspect ratio
$img = $img->thumbnail(
    new \Imagine\Image\Box(1920, 1080),
    \Imagine\Image\ManipulatorInterface::THUMBNAIL_INSET
);

// Strip any metadata embedded in the image, to save space and protect privacy
$img->strip();

// Render the binary data to store (or use $img->save() or $img->show())
$binary = $img->get('jpg');

When did you last check the size of your modx_session database table? Was it huge, like gigabytes huge? If so, you're not alone. 

To understand the problem, you need a little bit of background.

How do sessions work in PHP?

Typically, the standard PHP session handler will create a session when requested, and store it as a simple file somewhere on the system. The path it writes sessions to is configured with the session.save_path option in php.ini, and can point anywhere. When that option is empty, it writes to the server's temp directory.

Creating and loading sessions is simple enough, but the next thing the session handler does is clean up old sessions. This is called garbage collection, or GC. It removes sessions beyond their expiration time, to make sure the session store doesn't keep growing indefinitely and take up your vital disk space.

Garbage collection doesn't have to run on every request. If your session/cookie lifetime is configured to a week and you're not too picky about the exact timing of removal, then sessions only really need to be checked once a day. Cleaning up sessions can take a little time and resources, so PHP is by default configured to only do that once every 100 requests.

How do sessions work in MODX?

MODX registers a custom session handler that takes care of storing, retrieving, and cleaning up sessions. It writes this to one of its own database tables (modx_session), rather than files. This allows MODX a little more control over the flow of sessions. 

It is also possible to instruct MODX to use standard PHP sessions, and there's an extra available to write sessions to Redis. But the vast majority of sites will simply be using the default, writing to the database.

So, why does MODX not clean up its session table?

MODX waits for the signal from PHP that it's time to clean up sessions. This relies on two configuration options in PHP:

  • session.gc_probability
  • session.gc_divisor

You can find the official documentation for those options here.

Usually the probability is set to 1, and the divisor to a value like 100 or 1000. That means that on approximately 1 in every 100 requests, garbage collection will run.

When MODX does not seem to be cleaning up its session table, it's usually because of an attempt to improve the performance of garbage collection by bypassing PHP and off-loading it to a cron job that runs outside of the request/response cycle.

Those environments assume PHP writes its sessions to a standard location in the filesystem, and clean up that directory based on the timestamp on the file. The session.gc_probability option is then set to 0, to tell PHP to never run its own session garbage collection.

That works great - if your sessions are written to the standard location. Which MODX's aren't.

How common is this?

Based on data from SiteDash, which checks the size and status of your session table automatically, it's pretty common indeed. Out of a sample of 1727 sites, 27% seem to be affected by this.

How can I fix this?

Re-enable session.gc_probability. Set it to 1, and make sure session.gc_divisor is also set properly for your traffic.

Depending on your host and whether you have access to a server control panel, you may be able to change it yourself. In other cases, contact your host and ask them how it should be changed.
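
To check what a server is actually configured to do, here's a quick sketch you can run in a scratch file (plain PHP, nothing MODX-specific):

<?php
// Inspect the session garbage collection settings PHP is running with.
// With probability 1 and divisor 100, GC runs on roughly 1% of requests;
// a probability of 0 means PHP-level GC never runs at all.
echo 'session.gc_probability: ', ini_get('session.gc_probability'), PHP_EOL;
echo 'session.gc_divisor:     ', ini_get('session.gc_divisor'), PHP_EOL;
echo 'session.gc_maxlifetime: ', ini_get('session.gc_maxlifetime'), ' seconds', PHP_EOL;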

For a recent extra, I needed to get some arbitrary data into a package in such a way that it's available to both setup.options.php and the resolver, without duplicating that data. Specifically, it's a big array containing definitions for a theme, with assets and elements that need to be manually created/updated only if the user chooses to do so.

After some time, I found a way to do that using the package attributes. And in this article I'll show you how to add that to a standard build.

Define the data

First, define the data. I created the file _build/data/theme.inc.php to return an array, but you can place it wherever makes the most sense. The file will only be accessed when building the package, so it does not have to be in the core or assets folder (although it could be, if that makes sense for your use case).

<?php

$def = [
    // a whole bunch of data and elements
];

return $def;

Add the data to the package attributes

The package attributes are a special part of the package that gets stored in the package manifest rather than in a vehicle. They're used to hold some standard information: package name, changelog, readme, and license, among others.

In a standard build script the code to set the package attributes looks something like this:

<?php
// ... lots of other code ...
$builder->setPackageAttributes([
    'license' => file_get_contents($sources['docs'] . 'license.txt'),
    'readme' => file_get_contents($sources['docs'] . 'readme.txt'),
    'changelog' => file_get_contents($sources['docs'] . 'changelog.txt'),
    'setup-options' => [
        'source' => $sources['build'] . 'setup.options.php',
    ],
]);

It turns out, though, that package attributes are not limited to those standard items. Any attribute will be stored in the package manifest.

Let's take advantage of that by adding our own attribute containing (in this case) a theme-definition from our file:

<?php
$builder->setPackageAttributes([
    'license' => file_get_contents($sources['docs'] . 'license.txt'),
    'readme' => file_get_contents($sources['docs'] . 'readme.txt'),
    'changelog' => file_get_contents($sources['docs'] . 'changelog.txt'),
    'setup-options' => [
        'source' => $sources['build'] . 'setup.options.php',
    ],
    'theme-definition' => json_encode(include __DIR__ . '/data/theme.inc.php'),
]);

As the theme definition file returns an array, we're simply include-ing it. I decided to encode it as JSON, but I don't think you have to do that - the package manifest is serialised, so it should also support arbitrary arrays.

If you were to build a package at this point, that would include the theme-definition, but it's not being used yet.

Accessing package attributes in setup.options.php

In the _build/setup.options.php file, which is used to build the interface for the setup options shown when installing a package, the package attributes are available in $options['attributes'].

For example, to retrieve the theme-definition, the code would look like this:

<?php
$def = array_key_exists('theme-definition', $options['attributes']) 
    ? json_decode($options['attributes']['theme-definition'], true)
    : [];

if (empty($def) || !is_array($def)) {
    return 'Failed to load theme definition: ' . json_encode($options, JSON_PRETTY_PRINT);
}

foreach ($def as $definition) {
    // ... render the option ...
}

Now you can build a dynamic interface based on your data definition. We return an error to the setup options panel if we can't find the attribute.
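
As a hypothetical illustration, the loop could render something like a checkbox tied to a create_template option (picked up again in the resolver example below; the templates key in the definition is an assumption for this sketch):

<?php
// ... after loading and validating $def as shown above ...
$output = '<h3>Theme setup</h3>';
$output .= '<label>
    <input type="checkbox" name="create_template" value="1" checked="checked" />
    Create/update the ' . count($def['templates']) . ' templates in this theme
</label>';
return $output;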

Access data in resolvers

Building the interface is step one - accessing the same information in a resolver is step two.

In resolvers, the package attributes are in $options.

<?php

// Grab the MODX instance from the vehicle (standard resolver boilerplate)
$modx =& $object->xpdo;

$def = array_key_exists('theme-definition', $options) 
    ? json_decode($options['theme-definition'], true) 
    : [];

if (empty($def) || !is_array($def)) {
    $modx->log(modX::LOG_LEVEL_ERROR, 'Failed to load theme definition');
    return false;
}

The selected values in the setup options window are also available in $options. So if you created a setup option named "create_template", you can check that like so:

<?php

// Grab the MODX instance from the vehicle (standard resolver boilerplate)
$modx =& $object->xpdo;

$def = array_key_exists('theme-definition', $options) 
    ? json_decode($options['theme-definition'], true) 
    : [];

if (empty($def) || !is_array($def)) {
    $modx->log(modX::LOG_LEVEL_ERROR, 'Failed to load theme definition');
    return false;
}

if (array_key_exists('create_template', $options) && $options['create_template']) {
    foreach ($def['templates'] as $template) {
        // ... create the template or something ...
    }
}

Especially for use cases like themes, or where you have some dynamic data you want to manually create/update in a resolver instead of as a vehicle, this can be a useful technique to have under your belt.

Patreon is a community membership service that lets you pledge monthly donations, at a price you set yourself, to creators.

Back in December 2017 I first created a Patreon account to support Vasily "bezumkin" Naumkin with his work on the MODX core. Shortly after that I added pledges to the creators of PHPUnit, Flysystem, and Wait But Why (which has nothing to do with programming, but is just one of my favourite blogs on the internet). At a later point in 2018 I also pledged to Homebrew.

They're all small amounts, more a token of my support and gratitude than anything else. The largest part was for Vasily ($25, until July), and the others were $3-5 each for a total of $38.

There are no tangible benefits, other than rewarding the creators to make sure the software doesn't just go away. I routinely spend more money on stupid things I don't really need (like a huge foam enter button), while what these creators share with the world is much more valuable, so that's a really good deal.

They are also recurring, automatically charged monthly, which starts to add up over time.

In 2018 my personal Patreon donations totalled $348. Still not enough to pay anyone's bills or fund full-time employment, but probably more than I would've donated to these projects if they only accepted one-off donations, and the numbers do get more meaningful for the creator when you consider the effect of more people contributing. Previously these creators would fund their work through client work, sacrifice free time after a day job, rely on one-time donations, or juggle things in another way, while Patreon offers them a predictable (extra) income directly attributable to the work they share.


After a chat about the work that goes into my open source projects, and the MODX core in particular, I decided to set up my own Patreon yesterday.

I am fortunate enough to have a business that runs well and pays the bills, but I still constantly have to prioritise my time and energy. When the to-do list explodes, or energy gets low (yay, burnout), I have to choose what to work on. The reality of running a business is that things that make money are more important than things that don't, so it's usually the open source work that gets snoozed first, even though I strongly believe that the work I do on the MODX project should be an important part of my day-to-day work.

So that's where Patreon comes in.

By supporting me on Patreon, you're supporting my work for the MODX core and open source extras. Every contribution shows that these hours upon hours of work are worth something to you, and that will motivate me to keep up and make more time available to keep making the CMS you use better, one PR at a time.


If you're interested in supporting others in the MODX community, check out the Patreon of Joshua Lückers, the most recent person to become a MODX integrator, who has been putting in lots of hours as well.

I'm not aware of any other MODXers with a Patreon page at the moment (Vasily shut his down in July), but if you find any, or have any other Patreons you support, leave a comment below :)