The Real Challenge in Building an NFT Generator
What’s so difficult about creating 5,000 art pieces when it’s all done by code anyway? When it comes to discussions on NFT art, this is quite the common sentiment among casual observers. And as the technical artist behind the Big Bear Syndicate NFT collection, I must confess that I held these same thoughts at the start of this project.
In supporting the creation of our 5,000 bears, I’ve found that NFT art presents a unique set of challenges — one that requires a very holistic approach to problem-solving. Follow along for the development story of our tool, the PFP Generator!
NFT collections are usually composed of PFPs (profile pictures) — character portraits that their owners may personally relate to, or simply find interesting or exciting. To reinforce the sense of ownership, each PFP also tends to be visually unique.
There’s the first problem. How do you create a large number of unique portraits without straining the life out of your art team? The most effective and scalable solution is to assemble them from individual body parts, allowing you to mix and match them into lots of unique combinations. And given the tedious nature of this task, it’s best to have the combining process automated through a tool.
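To get a feel for how far mix-and-match stretches, multiply the variant counts per category. The counts below are made up for illustration; the article doesn't list the real ones:

```python
import math

# Hypothetical asset counts per category -- the real collection's
# categories and counts are not specified here.
assets_per_category = {"Head": 12, "Face": 20, "Body": 15, "Outfit": 18}

# Each PFP picks exactly one asset per category, so the number of
# possible combinations is the product of the category sizes.
total_combinations = math.prod(assets_per_category.values())
print(total_combinations)  # 64800
```

Even modest per-category counts comfortably clear 5,000 unique combinations, which is what makes the approach scale.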
Introducing: the PFP Generator, a tool I built for this purpose, as an extension of the 3D software Blender. To be clear, tools like this already exist out there, but we opted for an in-house solution for the flexibility to adapt to changing production needs. Additionally, we chose Blender as our rendering software even though it’s for 3D art — simply because we hadn’t decided at this stage whether to have our PFPs be in 2D or 3D, and Blender would be able to accommodate both future possibilities.
Now, onto the tool development process!
The Logic is the Easy Part
Yes, you read that right. I’d love to dive into a spiel about how much I strained my brain to develop an earth-shattering algorithm, but at its core, the logic for generating characters is simple.
For each category (Head, Face, Body, …):
- Pick one art asset in that category, and
- Hide all others from that category.
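In the real tool this runs against scene objects through Blender's `bpy` API, but the selection logic itself can be sketched in plain Python. The category and asset names below are invented for illustration:

```python
import random

# Illustrative scene contents: each category maps to its list of art assets.
# In Blender these would be objects grouped in collections; here we model
# visibility with a plain dict so the logic is easy to follow.
scene = {
    "Head": ["head_round", "head_square"],
    "Face": ["face_happy", "face_grumpy", "face_sleepy"],
    "Body": ["body_slim", "body_bulky"],
}

def generate_character(scene, rng=random):
    """For each category, pick one asset to show and hide all the others."""
    visibility = {}
    for category, assets in scene.items():
        chosen = rng.choice(assets)
        for asset in assets:
            visibility[asset] = (asset == chosen)  # True means visible
    return visibility

visibility = generate_character(scene)
# Exactly one asset per category ends up visible; render, repeat 5,000 times.
```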
Here are the results of the first prototype, which I built in a matter of hours (maybe even minutes).
While nothing looks right here — the composition is way off, the lighting is weird, the characters are comically simple — well hey, the logic works! In another universe, we could’ve stopped here and called this our NFT collection. Fortunately, our artistic standards were higher than that.
And in the same vein, I could’ve stopped here and handed things over to the art team. All they would need to do was swap out my cube assets for their meticulously painted ones…and we’d have our PFPs, right? All 5,000 of them.
- Certainly the artists wouldn’t have trouble working with the tool, which didn’t have an interface at this point (you had to run code from within Blender).
- Nor would they struggle with waiting for 5,000 images to be re-rendered each time they made any changes to the art assets (each render took 8 seconds, so the whole collection would take 11 hours).
Can you see where this is going?
The Hard Part is…(drumroll please!)
With NFT collections, we’re dealing with a lot of art assets at once — and that means a lot of room for mistakes and oversights. A hairstyle intersects a specific jacket’s collar, an “eyebrows” file is wrongly named as an “eyes” file, and so on. Realistically, a PFP collection like this requires tweaking and re-rendering a hundred times over, and then some.
Given the above, there are two main obstacles that would make this process miserable for everyone:
- Poor efficiency: it takes an absurd amount of time to get the results you want.
- High dependency: only one person (i.e. the tech artist) understands how to run the tool and troubleshoot its errors.
Here’s how I tackled these issues.
Solving the Efficiency Problem
The first move is to minimize the amount of time wasted sitting around. When making small tweaks, the artists should be able to quickly get only the results they want.
- If they’re editing one pair of shades, they shouldn’t have to re-render all 5,000 PFPs, including all the ones without shades.
- If there were 300 PFPs wearing shades, they shouldn’t have to re-render all 300 of them either every time. Ideally, they’d be able to render a small sample size of their choosing, like 5, just to check that things are going well.
Hence, we introduced output filters. When rendering PFPs with the tool, the artists can specify:
- ID Range: The range of PFP IDs to render from (an exact ID also works)
- Body Parts: Which body parts the PFPs must have
- Random Sample Size: How many PFPs to render from the filtered set
And voilà: now you can render exactly five of the PFPs wearing shades, and nothing else! With filters like these, the rendering workflow became dramatically more efficient.
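A rough sketch of how such filters might compose, in plain Python — the function name, parameters, and trait data here are hypothetical stand-ins, not the actual tool's API:

```python
import random

def select_ids(traits_by_id, id_range=None, body_parts=None,
               sample_size=None, rng=random):
    """Narrow down which PFP IDs to render, mirroring the tool's filters.

    traits_by_id: {pfp_id: set of asset names} for the whole collection.
    id_range:     (lo, hi) inclusive; a single ID is just (n, n).
    body_parts:   assets every selected PFP must include.
    sample_size:  randomly sample this many IDs from what remains.
    """
    ids = sorted(traits_by_id)
    if id_range is not None:
        lo, hi = id_range
        ids = [i for i in ids if lo <= i <= hi]
    if body_parts is not None:
        ids = [i for i in ids if set(body_parts) <= traits_by_id[i]]
    if sample_size is not None and sample_size < len(ids):
        ids = sorted(rng.sample(ids, sample_size))
    return ids

# Toy collection: every third PFP wears shades.
traits = {i: ({"shades"} if i % 3 == 0 else {"plain_eyes"})
          for i in range(30)}

# Five random PFPs that wear shades, instead of re-rendering everything:
print(select_ids(traits, body_parts=["shades"], sample_size=5))
```

Applying the filters in sequence (range, then traits, then sampling) keeps each one simple and lets artists combine them freely.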
Another area of improvement I’ll only briefly talk about is the rendering time, simply because it was oddly straightforward. As mentioned, each 3D image took a few seconds to render, adding up to hours. When we decided that our PFP collection would be done in 2D instead, we could forgo all the time-consuming work needed for 3D renders, like lighting calculations.
In its 2D incarnation, each PFP took 0.5 seconds to render, adding up to about 42 minutes for all 5,000. Much more manageable!
Solving the Dependency Problem
Driving the output filters from code like this can be intimidating for non-programmers. As much as possible, artists should not have to deal with code or terminals, especially if they have to continuously revisit the tool and tweak parameters. The moment anything is typed wrongly, they’ll be greeted with walls of error messages that require pulling the tech artist back in.
So I built a user interface for the PFP Generator, packaged with the code logic as a custom plugin (Blender calls them Add-ons) that any of our artists could install on their own computers.
This wasn’t wholly built overnight, of course. I started with the features that we knew we needed:
- Essential fields, like the input and output folders
- Basic quality-of-life fields, like the output filters
…and expanded the interface over time, as artists requested new features during testing. One of the key features that helped reduce dependency was error checking and reporting. Misnamed art files were a common error that left artists stuck, unable to start any renders. Having the tool report exactly which assets are problematic lets them fix their own mistakes and work independently, without tech art support.
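As a sketch, a naming check like this can be as simple as matching each file against the expected pattern. The naming convention and category list below are assumptions for illustration, not the project's actual convention:

```python
import re

# Assumed naming convention: <category>_<variant>.png, where the category
# must be one the generator knows about.
KNOWN_CATEGORIES = {"head", "face", "eyes", "eyebrows", "body"}
NAME_PATTERN = re.compile(r"^(?P<category>[a-z]+)_(?P<variant>\w+)\.png$")

def report_bad_assets(filenames):
    """Return one human-readable message per problematic file, so artists
    can fix naming mistakes without tech-art support."""
    problems = []
    for name in filenames:
        match = NAME_PATTERN.match(name)
        if match is None:
            problems.append(f"{name}: does not match '<category>_<variant>.png'")
        elif match["category"] not in KNOWN_CATEGORIES:
            problems.append(f"{name}: unknown category '{match['category']}'")
    return problems

# Two of these three files should be flagged with a specific reason each.
print(report_bad_assets(["face_happy.png", "eyez_wide.png", "hat-01.png"]))
```

The point is less the pattern itself than the reporting: a message naming the exact file and the exact problem turns a wall of errors into a to-do list.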
I’m happy to say that in the last month of our PFP art production period, our artists managed the rendering process without any intervention from me, sometimes re-running the renders dozens of times a day and encountering no technical issues. As a technical artist, I think that getting the art team to work independently and efficiently like this is pretty much the ideal outcome of my work.
To Wrap Up…
The PFP Generator project has shown me this: rather than thinking of tools as functions, with inputs and outputs, it’s more accurate to think of them as experiences. Tools are not just meant to process data, but also to serve as a living space for the user, however briefly that may be. And when it comes to living spaces, something that just works is never enough — you’ll want something that feels good.
With that said, I think there’s plenty of room for improvement in this tool development process. For one, I would have done a more thorough examination of existing tools before diving in to create our own. While organically expanding the PFP Generator based on our needs meant a very lean and focused toolset, I’m sure there are many other quality-of-life features we didn’t think of that others have successfully implemented.
What do you think? Let me know how we could have done better in our journey to creating the Big Bear Syndicate NFT Collection!