You've probably heard of UUIDs as a way to have unique identifiers for objects. But have you heard of ULIDs?

I sure hadn't until @PhilSturgeon tweeted about them, but since then I've taken a closer look and just finished implementing them in an xPDO project that had been using UUIDs until now.

The basic premise is that, just like UUIDs, you can create a ULID without needing to know what the last one was (which is different from the standard auto-incrementing primary ID mostly used in xPDO projects). They're guaranteed to be unique, at least up to a massive number of generations per millisecond. If the project needs to scale across multiple servers, or if you use it for request logging, that's a big plus.

Comparing ULID vs UUID is more subjective, but I like that you get simpler and shorter identifiers. While a UUID would look something like 05c337c3-d2b3-4a50-a8b1-90e4fae23cfc, a ULID looks like 01edksqtx9cfzzt1y9sm57h3yq. It's still random gibberish, but it's cleaner and ready to be used in URLs.

If I understand it correctly, the first part of the ULID is based on the timestamp and will actually sort (roughly) by the time the ULID was generated, which is also a useful feature and may help with insert performance in databases.
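
Here's a quick sketch of that ordering in action, using the robinvdvleuten/php-ulid library that the rest of this article also uses (the 2ms sleep is only there to guarantee a later timestamp):

<?php

use Ulid\Ulid;

// Later ULIDs compare greater than earlier ones, because the first
// characters of a ULID encode the generation timestamp.
$first = (string)Ulid::generate(true);   // lowercase
usleep(2000);                            // roughly 2ms later
$second = (string)Ulid::generate(true);

var_dump(strcmp($first, $second) < 0);   // bool(true)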

To incorporate this into an xPDO project, you'll need to replace your xPDOSimpleObject usage with a custom base object and implement some logic related to the primary key.

For this example, I'm using robinvdvleuten/php-ulid. If you're following along, install it into your project with Composer and make sure the Composer autoloader is loaded.

In your xPDO XML schema, define a base object that all your other objects will extend from. Note that this should extend xPDOObject (and not xPDOSimpleObject), and define the field that will hold the primary key.

The examples in this article are based on xPDO 3; for xPDO 2, remove the namespaces (and perhaps add a custom prefix for your project to avoid conflicts) and it ought to work just the same.

<?xml version="1.0" encoding="UTF-8"?>
<model package="YourNamespace\Model\" baseClass="YourNamespace\Model\BaseObject" platform="mysql" defaultEngine="InnoDB" version="1.1">
    <object class="YourNamespace\Model\BaseObject" extends="xPDO\Om\xPDOObject" inherit="single">
        <field key="ulid" dbtype="varchar" precision="52" phptype="string" null="false" />

        <index alias="ulid" name="ulid" primary="true" unique="true" type="BTREE">
            <column key="ulid" length="" collation="A" null="false" />
        </index>
    </object>
    
    
    <object class="SomeObject" table="some_object">
        ...
    </object>
</model>

As a ULID is encoded as a 26-character string, you can get away with lowering the precision to 26, but I like to have a little bit of padding in case things change down the road.

Because I'm defining the baseClass in the model, I've skipped providing the extends attribute on the object. 

Also note that we're defining the index to be primary and unique.

Now build the model classes using a build script or, for xPDO 3, the parse-schema command:

vendor/bin/xpdo parse-schema mysql path/to/your/project.mysql.schema.xml src/ -v --update=1 --psr4=YourNamespace

With the model files generated, edit your BaseObject class, which should be in src/Model/BaseObject.php (or model/yourproject/baseobject.class.php for xPDO 2).

What we're going to do is override the save() method to set the primary key to a freshly generated ULID for new objects.

(Again, note this is an xPDO 3 example. If using xPDO 2, remove namespaces and it should work the same.)

<?php
namespace YourNamespace\Model;

use Ulid\Ulid;
use xPDO\Om\xPDOObject;
use xPDO\xPDO;

class BaseObject extends xPDOObject
{
    public function save($cacheFlag = null)
    {
        if ($this->isNew()) {
            // Assign a freshly generated (lowercase) ULID as the primary key for new objects
            $this->set($this->getPK(), Ulid::generate(true));
        }
        return parent::save($cacheFlag);
    }
}

Pretty simple, huh? In this example I'm passing true to the generate method because I prefer the key to be lowercase. 

Now when you use the xPDO APIs to create new objects, it'll automatically add the ULID. And this will also work with your foreign keys and relations - just make sure to use a varchar field that can hold the ULID rather than an int like you may be used to with autoincrementing keys.

When retrieving objects, note that you should use the array syntax on $xpdo->getObject to specify the key. xPDO might just be smart enough to handle it since we've defined the primary key, but for security reasons you should never pass arbitrary user data as a string into the second parameter of getObject.

<?php

$ulid = (string)$_GET['ulid']; // or wherever you're getting this from, like a routing component
$object = $xpdo->getObject(\YourNamespace\Model\SomeObject::class, ['ulid' => $ulid]);
if ($object) {
    var_dump($object->toArray());
}
else {
    echo 'Doesn\'t exist';
}

Enjoy.

For a recent Extra, I needed to get some arbitrary data into a package in such a way that it's available to both the setup.options.php and the resolver, without duplicating that data. Specifically, it's a big array containing definitions for a theme, with assets and elements that needed to be manually created/updated only if the user chose to do so.

After some time, I found a way to do that using the package attributes. And in this article I'll show you how to add that to a standard build.

Define the data

First, define the data. I created the file _build/data/theme.inc.php to return an array, but you can place it wherever makes the most sense. The file will only be accessed when building the package, so it does not have to be in the core or assets folder (although it could be, if that makes sense for your use case).

<?php

$def = [
    // a whole bunch of data and elements
];

return $def;

Add the data to the package attributes

The package attributes are a special part of the package that gets stored in the package manifest rather than in a vehicle. They're used to hold some standard information: the package name, changelog, readme, and license, among others.

In a standard build script the code to set the package attributes looks something like this:

<?php
// ... lots of other code ...
$builder->setPackageAttributes([
    'license' => file_get_contents($sources['docs'] . 'license.txt'),
    'readme' => file_get_contents($sources['docs'] . 'readme.txt'),
    'changelog' => file_get_contents($sources['docs'] . 'changelog.txt'),
    'setup-options' => [
        'source' => $sources['build'] . 'setup.options.php',
    ],
]);

It turns out, though, that package attributes are not limited to those standard items. Any attribute will be stored in the package manifest.

Let's take advantage of that by adding our own attribute containing (in this case) a theme-definition from our file:

<?php
$builder->setPackageAttributes([
    'license' => file_get_contents($sources['docs'] . 'license.txt'),
    'readme' => file_get_contents($sources['docs'] . 'readme.txt'),
    'changelog' => file_get_contents($sources['docs'] . 'changelog.txt'),
    'setup-options' => [
        'source' => $sources['build'] . 'setup.options.php',
    ],
    'theme-definition' => json_encode(include __DIR__ . '/data/theme.inc.php'),
]);

As the theme definition file returns an array, we're simply include-ing it. I decided to encode it as JSON, but I don't think you have to: the package manifest is serialised, so it should also support arbitrary arrays.

If you were to build a package at this point, that would include the theme-definition, but it's not being used yet.

Accessing package attributes in setup.options.php

In the _build/setup.options.php file, which is used to build the interface for the setup options shown when installing a package, the package attributes are available in $options['attributes'].

For example, to retrieve the theme-definition, the code would look like this:

<?php
$def = array_key_exists('theme-definition', $options['attributes']) 
    ? json_decode($options['attributes']['theme-definition'], true)
    : [];

if (empty($def) || !is_array($def)) {
    return 'Failed to load theme definition: ' . json_encode($options, JSON_PRETTY_PRINT);
}

foreach ($def as $definition) {
    // ... render the option ...
}

If we can't find the attribute, we return an error to the setup options panel; otherwise, you can build a dynamic interface based on your data definition.
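
For example, here's a rough sketch of the rendering step, continuing from the $def we just decoded and assuming it contains a 'templates' array with 'name' keys (the markup and option name are illustrative); it renders a "create_template" checkbox that the resolver example further down checks for:

<?php
// $def was decoded from $options['attributes']['theme-definition'] above.
// Render an opt-in checkbox plus a preview of the templates that would be created.
$output = '<label><input type="checkbox" name="create_template" value="1" checked="checked"> Create/update templates</label>';
$output .= '<ul>';
foreach ($def['templates'] as $template) {
    $output .= '<li>' . htmlspecialchars($template['name'], ENT_QUOTES) . '</li>';
}
$output .= '</ul>';

return $output;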

Access data in resolvers

Building the interface is step one - accessing the same information in a resolver is step two.

In resolvers, the package attributes are in $options.

<?php

$def = array_key_exists('theme-definition', $options) 
    ? json_decode($options['theme-definition'], true) 
    : [];

if (empty($def) || !is_array($def)) {
    $modx->log(modX::LOG_LEVEL_ERROR, 'Failed to load theme definition');
    return false;
}

The selected values in the setup options window are also available in $options. So if you created a setup option named "create_template", you can check that like so:

<?php

$def = array_key_exists('theme-definition', $options) 
    ? json_decode($options['theme-definition'], true) 
    : [];

if (empty($def) || !is_array($def)) {
    $modx->log(modX::LOG_LEVEL_ERROR, 'Failed to load theme definition');
    return false;
}

if (array_key_exists('create_template', $options) && $options['create_template']) {
    foreach ($def['templates'] as $template) {
        // ... create the template or something ...
    }
}

Especially for use cases like themes, or where you have some dynamic data you want to manually create/update in a resolver instead of as a vehicle, this can be a useful technique to have under your belt.

For SiteDash I built a worker queue, based on MySQL, to handle processing tasks asynchronously. There's a central database and separate worker servers inside the same private network that poll for new tasks to execute. These worker servers run PHP, using xPDO 3, to perform the tasks the application server has scheduled.

One problem that would occasionally pop up is that the worker servers would lose their connection to the database. The database is on a different server in the network, so that could come from rebooting the database server, a deploy or backup causing high load, a network glitch, or just... gremlins.

Obviously, the worker servers need to talk to the database to be useful, so I started looking at a way to 1) detect that the connection was dropped and 2) automatically reconnect if that happens. It turns out to be fairly straightforward (once you know how!).

First, I implemented a check to see if the connection is alive. It does that by checking whether a statement (query) can be prepared.

<?php
while (true) {
    $q = 'query that is irrelevant here';
    $stmt = $xpdo->query($q);
    if ($stmt) {
        $stmt->execute();
    }
    else {
        reconnect();
    }
    // Execute task, if any
    sleep(1);
}

function reconnect() {
    global $xpdo;
    // Unset the internal PDO instance so xPDO no longer thinks it has a live connection
    $xpdo->connection->pdo = null;
    // Reconnect using the connection details xPDO already has
    return $xpdo->connect(null, array(xPDO::OPT_CONN_MUTABLE => true));
}

The workers run in an infinite loop, one loop per second, so this check happens every second. When the statement can't be prepared it's treated as a dropped connection, and we call the reconnect method to restore the connection.

The reconnect happens by unsetting the PDO instance on the xPDOConnection instance. Without that, xPDO thinks it still has a connection, and will continue to fail. Because we don't unset the xPDOConnection instance, we can just call $xpdo->connect() without providing the database connection details again.

With this check in place, the loop can still get stuck in a useless state if there's a reason it can't reconnect. That can have some unintended side effects and makes it harder to detect a problem that needs manual intervention, so I also implemented another check.

Every 10 loops, another query is sent to the database with a specific expected response: a simple SELECT <string>. The idea is the same as the check above: see if the statement can't be prepared or doesn't return the expected result, and if so, do something.

Here's what that roughly looks like:

<?php
$wid = 'Worker1';
$loops = 0;
while (true) {
    $loops++;
    
    $q = 'query that is irrelevant here';
    $stmt = $xpdo->query($q);
    if ($stmt) {
        $stmt->execute();
    }
    else {
        reconnect();
    }
    // Execute task, if any
    sleep(1);
    
    // Every 10 loops, check if the connection is alive
    if (($loops % 10) === 0) {
        $alive = $xpdo->query('SELECT ' . $xpdo->quote($wid));
        if (!$alive || $wid !== $alive->fetchColumn()) {
            break;
        }
    }
}

function reconnect() {
    global $xpdo;
    $xpdo->connection->pdo = null;
    return $xpdo->connect(null, array(xPDO::OPT_CONN_MUTABLE => true));
}

In this case, we're not calling the reconnect() method. Instead, we're breaking out of the loop. This way the PHP process can end gracefully, instead of pretending to be churning along properly. When the process ends, supervisord automatically restarts it. When a new process is unable to connect, the logs and monitoring get a lot louder than when a worker silently keeps running, so this system is working nicely.

Now, this obviously isn't the entire worker code for SiteDash. Over time it has grown into 300 lines (not counting the tasks themselves) of worker logging, automatic restarting when a deployment happens, analytics, the ability to gracefully kill a process, and dealing with unexpected scenarios like a database connection getting dropped.

Overall this system has managed to keep the processes running quite nicely. There were some issues where certain tasks would cause a worker to get stuck, which have now been resolved, and currently the biggest limiting factor for the worker uptime is deployments. The workers need to restart after a deployment to make sure there is no old code in memory, and I have been fairly busy with adding features to SiteDash (like remote MODX upgrades last week!).

It's also been fun and interesting to try to get good insight into these background processes and to tweak the monitoring to notify about unexpected events without triggering too many false alarms. A challenge I'd like to work on in the future is automatically scaling the number of workers if the queue goes over a certain threshold, but for now I can manually launch a couple of extra processes quite quickly if things take too long.

Some fun numbers:

  • Overall, since launching workers as indefinitely running processes, the average worker process was alive for 12.5 hours.
  • Since fixing the last known glitch where a process could get stuck executing certain tasks, on October 29th, the average worker stayed online for 2 days, 12 hours and 52 minutes.
  • The longest running workers started on November 1st and stayed up for 12 days, 5 hours and 40 minutes before being restarted due to a deployment.

Sometimes you need to define multiple criteria in xPDO schemas to make sure your relations are properly defined. I came across this while working on a project that needed some existing databases integrated. This specific case had one "fl_translations" table that contained the translations for thousands of objects. To get the translations for a specific object, you would need to filter that table on the "originaltable" field. For example, one row might have originaltable=fl_colors and contains the translation for the color in a specific language.

In order to properly define this relation, I went browsing the MODX core schema for some ideas, and I stumbled across a pretty much undocumented feature of xPDO Schemas: relation criteria!

Here's roughly what that looks like in the schema (the class and field names below are simplified and illustrative; the important part is the <criteria> element):
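
<object class="flColor" table="fl_colors">
    ...
    <composite alias="Translations" class="flTranslation" local="id" foreign="content_id" cardinality="many" owner="local">
        <!-- Only relate rows from fl_translations where originaltable is fl_colors -->
        <criteria target="foreign">{"originaltable":"fl_colors"}</criteria>
    </composite>
</object>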

Basically, with a normal relation you would immediately close the <composite> (or <aggregate>) element, but if you want additional filters (criteria) on the relation, you can define those inside it: a <criteria> element with a target (I presume, but haven't tested, that you can also add an additional <criteria> element with a target of local), containing a JSON object with the field name (of the foreign object) and the value it should have (in this case originaltable and fl_colors, as that's how it was set up).

This way of defining relations is very powerful and I imagine some really complicated relations could be defined once (in the schema) and reused without knowing all the details by simply calling getMany('RelationAlias').

The core uses this type of relationship for defining the PropertySets relations on elements and also in the modAccess definitions. For example, here is roughly what the relevant snippet from the core schema for modChunk looks like (paraphrased from memory; check the core's modx.mysql.schema.xml for the exact definition):
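
<object class="modChunk" extends="modElement">
    <aggregate alias="PropertySets" class="modElementPropertySet" local="id" foreign="element" cardinality="many" owner="local">
        <!-- Only relate property set assignments that were made for chunks -->
        <criteria target="foreign">{"element_class":"modChunk"}</criteria>
    </aggregate>
</object>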

xPDO is quite nifty eh?

Just a quick post here for something that I see asked (or, more accurately, see being done in the wrong way) way too often.

When you're building your own components or data model using xPDO Schemas, you will probably be working with that data in processors, snippets or external applications as well. At some point you will have related tables, and you will love coming across methods like addOne and addMany - they're awesome!

There's one big caveat when using them, and that is figuring out which one to use.

Oh, you use addOne when you want to add *one* related object, and addMany when you want to add multiple in one go? WRONG. Well, okay, there's a small truth in there, but it is not about whether you happen to be adding a single related object or an array of objects. If you are using the wrong method, you can often tell by the relation not being added to the extra object(s), as if the relation isn't "sticking" for some reason.

So do I use addOne or addMany?

If you've read the Defining Relationships documentation page, which is a great introduction to aggregate and composite relations with xPDO schemas, you should have noticed this thing called the cardinality. This little thing is very important and the key to figuring out when you need addOne, and when you need addMany.

You see, if the cardinality is one, you will need to use addOne. If the cardinality is many, you will need to use addMany, even when you're only adding a single object.
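
A minimal sketch, assuming a hypothetical schema where a Blog object has an "Author" aggregate with cardinality="one" and a "Comments" composite with cardinality="many" (all class and field names here are made up):

<?php

$blog = $xpdo->newObject('Blog');

// Cardinality "one" on the Blog side, so addOne()
$author = $xpdo->newObject('Author');
$author->set('name', 'Jane');
$blog->addOne($author, 'Author');

// Cardinality "many" on the Blog side, so addMany() - even when adding just one comment
$comment = $xpdo->newObject('Comment');
$comment->set('body', 'First!');
$blog->addMany($comment, 'Comments');

$blog->save();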

That's all folks!


As of MODX Revolution 2.2, developers are handed class-based processors to speed up development of back-end components. These are great, and I have blogged about Class Based Processors in general before with some quick examples, but in this article we'll dive into a particularly awesome one: modObjectGetListProcessor.

The modObjectGetListProcessor is mostly used for populating grids through the modExt grid implementation, but you could also use it for any other widget that uses a JSON data store. And processors aren't limited to being used by connectors for back-end components... they're also great for keeping your code DRY (Don't Repeat Yourself) when used in snippets!

For this article we'll assume a simple grid though. The techniques displayed can be used to point you in the right direction for other implementations.

The Basics

Here's basically the minimum processor file you can use:
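
(The class, object, field, and lexicon names in this sketch are illustrative, for an imaginary reservations component.)

<?php

class ReservationGuestGetListProcessor extends modObjectGetListProcessor {
    public $classKey = 'ReservationGuest';
    public $languageTopics = array('reservations:default');
    public $defaultSortField = 'initiatedon';
    public $defaultSortDirection = 'DESC';
    public $objectType = 'reservations.guest';
}
return 'ReservationGuestGetListProcessor';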

... and the reason they're so awesome. Brief, super awesome, working code!

The most important thing to note is the public variable $classKey: this is the class name of the object you are going to retrieve. Furthermore, you'll see we set $defaultSortField to the "initiatedon" date field from the schema, and with $defaultSortDirection we make sure we get the latest on top. The $objectType variable is not strictly required, but allows prefixed (lexicon) error messages for some default sanity checks; for example, the update processor uses the objectType to prefix the _err_ns lexicon key if the primary key is not specified.

We also make sure to return the name of our extended class at the end, as that is used to instantiate the processor when it's called. While you're free to name it whatever you want, I'd advise you to keep it the same as your classKey. That way, when adding a new processor, you can just copy/paste another one, find/replace the old classKey with the new one, and you're good to go.

Exploring the Processor Process

Just like in an earlier post dealing with the modObjectUpdateProcessor class-based processor, I have created a list of what happens in the processor, which you will find below.

  1. Processor instantiated, properties being set.
  2. Using checkPermissions() the processor decides if the user is allowed to access it.
  3. The processor finds lexicon topics to load via getLanguageTopics, which gets its data from the languageTopics variable (as an array).
  4. initialize() is called on the processor, which sets a number of default properties including sort to the defaultSortField (default: name) class variable, and the direction to the defaultSortDirection variable (default: ASC).
  5. process() is called.
  6. beforeQuery() is triggered by process(), and if the result is not a boolean TRUE it will consider the processor to have failed and cancel further execution.
  7. getData() is triggered by process().
  8. The getData() method builds an xPDOQuery object for the classKey type.
  9. The getData() method calls prepareQueryBeforeCount(xPDOQuery $c), allowing you to add additional conditions to the query. After calling that, it fetches the total number of results using modX.getCount.
  10. prepareQueryAfterCount(xPDOQuery $c) is called by getData().
  11. The query is sorted with help of the getSortClassKey() method, and the sortAlias, sort and dir properties.
  12. If the limit property is larger than 0 it limits the query and sets an offset.
  13. modX.getCollection is called and your data is retrieved.
  14. Every row is iterated over using the iterate(array $data) method. iterate calls beforeIteration(array $list), and starts looping over the rows.
  15. If the checkListPermission variable is true, the object extends modAccessibleObject, and checkPolicy('list') returns false, the row is skipped.
  16. prepareRow(xPDOObject|modAccessibleObject $object) is called, which needs to return an array with the object's fields. This is a great method to customize the retrieved data. The array is added to the list.
  17. After iterating over the entire result set, afterIteration(array $list) is called.
  18. The data is returned.

Example Usages

Defining Constraints (where field X has value Y)

When adding constraints, we take our minimum processor and add (actually override) a function called prepareQueryBeforeCount. This function takes the xPDOQuery object as a parameter and expects it to be returned as well.
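
Here's a sketch of that override, sticking with the illustrative reservations example from before (the "reservation" field and property names are assumptions):

<?php

class ReservationGuestGetListProcessor extends modObjectGetListProcessor {
    public $classKey = 'ReservationGuest';
    public $defaultSortField = 'initiatedon';
    public $defaultSortDirection = 'DESC';

    public function prepareQueryBeforeCount(xPDOQuery $c) {
        // Default of 0: if no reservation is passed, the query matches nothing
        // instead of returning every row (all real rows have a reservation > 0).
        $reservation = (int)$this->getProperty('reservation', 0);
        $c->where(array(
            'reservation' => $reservation,
        ));
        return $c;
    }
}
return 'ReservationGuestGetListProcessor';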

Easy enough: we first get the "reservation" value using $this->getProperty(). By specifying a second argument, we get a default value instead of NULL. In this case I'm setting the default to zero, which makes sure that if no reservation is passed, the query doesn't return all results, but no results instead (as all rows have a reservation set to > 0).

After getting the reservation variable, we just interact with the xPDOQuery $c as we would in a normal processor (or script) and pass our where condition.

In the end we return the xPDOQuery (this is important!) and we've limited our query to just that reservation.

Modifying the way row data is returned

In some cases, your database setup may not completely match how you want to display the data in the front end. For example, you may have an array (stored serialized) which you want returned as one line of text per key=>value pair, for rendering in a textarea.
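
Here's a sketch of such a prepareRow override (the "properties" field name and the class are illustrative):

<?php

class ReservationGuestGetListProcessor extends modObjectGetListProcessor {
    public $classKey = 'ReservationGuest';
    public $defaultSortField = 'initiatedon';
    public $defaultSortDirection = 'DESC';

    public function prepareRow(xPDOObject $object) {
        // Extra arguments to toArray(): no key prefix, and raw (unprocessed) field values.
        $row = $object->toArray('', true);

        // Turn the serialized "properties" array into one "key: value" line per entry,
        // ready for rendering in a textarea.
        $properties = is_array($row['properties'])
            ? $row['properties']
            : (array)unserialize((string)$row['properties']);
        $lines = array();
        foreach ($properties as $key => $value) {
            $lines[] = $key . ': ' . $value;
        }
        $row['properties'] = implode("\n", $lines);

        return $row;
    }
}
return 'ReservationGuestGetListProcessor';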

You will also see that instead of simply calling $object->toArray(), I am passing some additional parameters.

Specifically selecting fields
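
A sketch of limiting the selected columns, again via prepareQueryBeforeCount (the field list is illustrative):

<?php

class ReservationGuestGetListProcessor extends modObjectGetListProcessor {
    public $classKey = 'ReservationGuest';
    public $defaultSortField = 'initiatedon';
    public $defaultSortDirection = 'DESC';

    public function prepareQueryBeforeCount(xPDOQuery $c) {
        // Only select the columns the grid actually needs.
        $c->select(
            $this->modx->getSelectColumns($this->classKey, $this->classKey, '', array('id', 'name', 'initiatedon'))
        );
        return $c;
    }
}
return 'ReservationGuestGetListProcessor';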

You could also join tables in the prepareQueryBeforeCount method, add additional constraints, and so on.

Are there any more examples you would like to see, or have some to share yourself? Let me know in the comments!