Pixel art test: Photoshop (beta) AI tools

image

A new beta version of Photoshop was recently released with generative AI tools (Adobe Firefly) supported natively, directly in the app. I tried it with a simple pixel art scene.

Midjourney

I started by prompting a base image in Midjourney, just to get started. I wanted to see how much I could edit this image in Photoshop and whether it would still stay true to the style.

image

From this selection of images I chose the bottom left one for this test because I liked how soft it looked and how muted the palette was.

image

pixelart scene with a small house next to a pond with a rowboat

Photoshop & After Effects 1st test

I then loaded this image in Photoshop to test the image outpainting and see if the style would stay coherent.

To get access to these new AI tools within Photoshop, you need to have a CC subscription. Just load up CC and head on over to the Beta Apps tab. If you are already on a Photoshop beta, you might need to remove your old installation and download the beta again.

image

Now once inside Photoshop, simply make a selection and you will see the new generative AI prompt panel appear:

image

I took the newly generated image from Photoshop to After Effects to do a non-destructive pixelart rescale pass on the image.

Work timelapse from Photoshop and AE

After this test I could not help myself from adding some more animations on top!

Adding layered animation

This got me thinking: how hard would it be to erase things from the image with the new tools in Photoshop to generate more layered animation?

Another timelapse from the layered progress.

I was very pleased with the way I could take a pre-existing image and remove parts, or extend the image, and the new AI fill tool in Photoshop was able to keep the style very coherent with the rest of the image. In these experiments I used an image from Midjourney as a source.

Editing Midjourney images with AI has always been difficult, as Stable Diffusion based tools did not quite manage to match the style. DALL·E 2 was better at it, but I would have had to pay for yet another tool.

The final image

image

The fact that Photoshop now has a very capable generative AI system built in makes editing these images lightning fast. This little test took me a couple of hours to complete, from nothing to a full short video clip!

image

In part 1, we used Midjourney and Photoshop to create the 2D locations. Now it is time to merge them together – in 3D.

Behind the scenes timelapse

Part 1 was all about the 2D workflow: how to edit Midjourney-generated images in Photoshop (beta). You can read it here before continuing with this part 2.

Editing Midjourney outputs in the new @Photoshop beta with built in generative tools. I already wrote a devblog post on how it slots in my pipeline: https://t.co/KweRVjGusA #AIart pic.twitter.com/uAkvL1TNxT

People were really into the generative tools in the new Photoshop beta. I posted a few tweets and especially the pixelart one got a lot of traction!

But, now on with the game protagonist’s home apartment!

Planning

A word about the room selection for this location. I knew that I needed 2 rooms that I could connect. I wanted the doorways to be visible in the image generations so the player would be able to click on them. I am not a fan of having the player walk towards the bottom edge of the screen to get to a new location; I think it is unclear when you do not see the pathway on the game screen.

The kitchen scene: raw Midjourney generation vs edited scene

image
image

It took me a while to get images I liked, and in the end I did have to create the connecting pathways in Photoshop (beta) using generative fill on top of the Midjourney images.

The lobby scene: raw Midjourney generation vs edited scene

image
image

I made my life a little easier by creating the kitchen location in a way that hid the other room behind a wall. I was sure the kitchen back wall could be visible from the lobby/bedroom, but I did not want the player to see the lobby, as I was unsure whether the location would hold up when seen from a steep off-angle.

image

The location images next to each other before being turned into 3D

The kitchen was already at an angle that would lend itself very well to being seen from another room! I also chose images between which I could see the camera moving smoothly without having to clip through walls or move into the image too much.

image

Top down view of the location in Unity

I did not do any sketching or planning for the apartment layout beforehand, but based on my 3D modeling experience I tried to avoid any rooms that would end up being problematic.

Shadow painting

Before moving on to the 3D modeling step, it was time to make the shadow passes of the original images. These shadow passes are then masked with custom shader code to apply shadows to the image that do not look out of place.

image

The shadows are rendered using a custom shader that uses the 3D shadows cast by realtime 3D lights to mask out the shadow version of the location image. This ensures I never have shadow overdraw on already-shaded areas of the image, and I can precisely control the shadow look.
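The post does not include the actual shader source, but the masking idea is simple to express. As a rough illustration (a NumPy sketch, not the in-game shader; the function name is mine), this is the compositing the shader performs per pixel:

```python
import numpy as np

def apply_shadow_pass(base, shadow_pass, shadow_mask):
    """Blend the hand-painted shadow pass over the base location image.

    base, shadow_pass: float arrays in [0, 1], shape (H, W, 3)
    shadow_mask: float array in [0, 1], shape (H, W, 1) -- 1.0 where a
    realtime 3D light casts shadow, 0.0 where the surface is lit.
    """
    # Where the mask is 1, show the painted shadow version; where it is 0,
    # keep the original image. Because the shadow pass is painted from the
    # image itself, areas that were already in shadow look the same in both
    # versions, so there is no visible double-shadow overdraw.
    return base * (1.0 - shadow_mask) + shadow_pass * shadow_mask
```

In a real shader this lerp runs per fragment, with the mask coming from the engine's shadow map lookup.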

I do this step in Photoshop. First, I add a new layer, then duplicate the original image, apply a high-pass filter to it and set it to Overlay mode, with a clipping mask set to the newly created layer, which is in turn changed to Darken mode.

image

Shadow pass of the kitchen room

Then, by simply using the color picker to select nearby shadow values, I paint out the highlights and lit areas in the image. Sometimes when you see the shadow pass in use in-game, you realise that the shadows you painted are too dark. That is what happened in this location as well, so I had to repaint this shadow pass a couple of times to get it right.

fSpy + Blender

After all the 2D steps are done, it is time to model the locations. As always, I start by reverse engineering the camera in fSpy. When you have a 3D camera that goes with the image, it is very quick and easy to create a simple 3D mesh of the location for 3D projection.

fSpy files for the 2 rooms

image

This location, along with the fluid sim factory scene, was a bit different: in this scene the camera would be moving! Because of the parallax shift visible to the player, I would have to add a little more detail than usual. Normally I only need to model detail where the shadows fall on the environment and on parts that occlude the player, but this time I also needed to make sure the parallax shift would look somewhat natural.

The scenes in Blender

Because of this added requirement, modeling this two-room location took a couple of hours, maybe 5 or 6? I would have been a lot faster if it were not for Blender. I am still learning it, and simple things like merging vertices require a Google search. Having 20+ years of 3D experience with other software makes learning Blender so hard! That software is pretty unique and simply weird.

image

Secondary fridge UV in Modo.

I did do the separate texturing for the fridge interior in Modo. I simply did not want to waste an hour learning UV mapping in Blender when I could do it in Modo in 5 minutes. If Modo had the same project-from-view UV functionality that does not fix the aspect ratio, I would be using Modo for everything. The lack of this simple feature (along with the lack of a native Apple silicon build) caused me to cancel my maintenance licence and go full Blender – even if it is taking me a while to learn it.

Unity

But finally I had all the pieces of the puzzle. It was now time to set it all up in Unity. I had not tested this before, so I simply hoped that the scenes would match, both in color and in shape.

I was extremely lucky, albeit well prepared, and the rooms matched each other almost perfectly!

image

The kitchen location peeking from the lobby.

I added in all the usual sugarcoating: painted shadows, depth of field, grain, ambient lighting, fog and matching 3D lights to blend the character as well as possible.

Now it was on to the special sauce of this scene: the combined locations.

image

An isometric view of the apartment location with both cameras shown.

This was pretty easy to set up, actually. I had the 2 rooms already laid side by side. Even though each came from a different camera angle, using fSpy to sniff out the vanishing points brings them both into a "normalised" 3D space, meaning that once I have modelled the locations, they are not rotated oddly but actually line up perfectly!

For navigation, I use invisible cubes instead of the actual room mesh, as I want to precisely control where the player can and cannot walk, and how close they pass by different walls or props when navigating these tight spaces.

Camera move

The camera movement in the scene would be a dolly combined with a pan. The combination of the two is better than a simple dolly, which would expose the projection nature of the scene more. Having the second location painted at an angle also helped by reducing the parallax of the transition a little. You can still see some stretching in the lobby scene, but that is something I can fix in the future by painting in some obstructed areas and adding more geometry.
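In-engine the move is handled by Unity and Adventure Creator, but the dolly + pan blend itself is just two interpolations driven by one normalized time value. A minimal sketch (all function names are mine, not from Adventure Creator's API), with a smoothstep ease so the move starts and stops gently:

```python
def smoothstep(t):
    """Ease-in/ease-out curve: 0 at t=0, 1 at t=1, zero slope at both ends."""
    return t * t * (3.0 - 2.0 * t)

def lerp(a, b, t):
    return a + (b - a) * t

def blend_camera(start_pos, end_pos, start_yaw, end_yaw, raw_t):
    """Blend camera position (the dolly) and yaw angle (the pan) together.

    raw_t is the normalized progress of the move, 0..1.
    """
    t = smoothstep(min(max(raw_t, 0.0), 1.0))
    pos = tuple(lerp(a, b, t) for a, b in zip(start_pos, end_pos))
    yaw = lerp(start_yaw, end_yaw, t)
    return pos, yaw
```

Driving both the dolly and the pan from the same eased `t` keeps them in lockstep, which is what makes the combined move read as one motion rather than two.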

image

The seam meshes between the 2 rooms.

I also added some black cubes in front of the camera at the seam. These boxes imply some cabinetry or other details that whip past the camera. They were necessary to hide the locations at their most stretched, at the sides of the screen.

image

The camera trigger hitboxes with scripting visible in the details panel

The camera move would be triggered by hitboxes. Using the doorway would force the character to walk to the other room, and when entering that room the player would hit a hitbox and the camera would follow. I wanted the move to be predetermined like this, instead of a camera freely following along a set path, to keep the images more static for better point-and-clickery.

I topped it all off with a unique post-process volume with a massive vignette for this scene. The vignette really helped tie the location together and make the transition look smoother.

Scene interactions

In addition to the camera move, this would also be the first location with actual player / environment interactions: opening the fridge!

This feature is done simply by animating only the fridge door and using IK (inverse kinematics) to attach the player's hand to the handle. The player itself is not running any animations; it is driven entirely by the animating door.

image

Fridge door action list segment

I am trying to do as much of the scripting of the game using Adventure Creator’s tools. If I need to run custom code, I program plugins / additions for Adventure Creator.

In the past I did custom programming directly in the AC source, but I have stopped doing that and use their API instead, as upgrading a customised Adventure Creator to the latest version was always a hassle.

Final gameplay screen capture of the location

This was a fun little experiment! And one that was just as successful as I had hoped! Now I have a much better understanding of what it takes to combine multiple 2D backgrounds into a larger location, and I am much more confident about adding more of these in the future.

The next challenge will be to seamlessly blend from one open space (like a store front) to another with two camera-projected scenes like this. I have no idea how to marry those things together. Even if it is easy in 2D, I do not know how I would carry that over to the 3D scene. Time to find out!

One response to “Home location – part 2: combining the scenes”

  1. Martin Hanuš June 4, 2023 Reply
  2. if you create video tutorial – full process you will be milionare


image


Jussi Kemppainen

June 5, 2023

I had planned to keep the characters far away from the camera, as they were not the best quality. But as I have been working on these scenes the main character creeps closer and closer. So I needed to make him look better.

The upgraded character in game.

Texture upgrades

The idea for my character pipeline was to use as many AI-generated pixels in the final character as possible. I achieved this by prompting a character turnaround from Midjourney and projecting the AI-generated imagery onto the mesh UVs after modeling. A full breakdown can be read in this blog post, but here is a TL;DR.

image

This is the source image for the main character, with the prompt “very old and weak man, cyberpunk point and click adventure game character, full body, model sheet turnaround, full color, front::4 view, side view and back view –v 4 –ar 3:2”

The character stayed coherent because all of the angles were in the same image. I used this as a reference for modeling and then posed the character in 3 different poses (morphs) for projecting the image onto the geometry.

Front projection | Side projection | Back thirds projection

These 3 projections were then combined in Photoshop to create the final texture.

My initial idea indeed was to never let the character get too close to the camera. But as the project went along and I generated more and more locations, I found that the character was quite often up close to the camera.

image

The original version of the character in game

The low-resolution, artefact-ridden, flat texture began to make me feel more and more uncomfortable.

Generative Fill and overpainting to the rescue

Around this time Adobe incorporated the Firefly generative AI engine into the latest Photoshop beta, and I saw it as a way to easily get rid of most of the artefacts. I used a combination of generative fill and overpainting to fix the original UV texture.

With the use of generative fill I was able to very easily fix most of the issues of the texture. It works so well for removing unwanted features from an image while retaining the original style!

I did, though, do a good old-fashioned overpaint for most of the face, as generative fill produced some pretty insane facial features! I also painted out the eyeglasses from the texture, as they would be modelled in later.

To get rid of the overly cartoony shadows, I used the Lighten blend mode so that my paint-over only affected the shadow areas.
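Lighten mode keeps the brighter of the two pixels per channel, which is why a mid-tone paint layer lifts only the darker shadow areas and leaves already-lighter pixels untouched. As a small NumPy illustration of the blend math (not a Photoshop API, just the formula):

```python
import numpy as np

def lighten_blend(base, paint):
    """Photoshop-style Lighten blend: per channel, keep the lighter value.

    Painting with a mid-tone (e.g. 0.4) only brightens pixels darker than
    0.4 -- the harsh shadow areas -- and leaves brighter pixels as they were.
    """
    return np.maximum(base, paint)
```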

As a last touch, I copied the document, flattened it and used Adobe’s AI upscale to upscale the texture 2x.

Original Midjourney-based texture | 2x upscaled version of the texture with cleanup

In addition to the edits on the base color map, I also added a metallic mask and a roughness mask. I wanted the few metal parts of the costume to appear metallic in game.

Adding more details

Now it was time to hop on over to Modo for some old-fashioned 3D modeling of additional details.

Separating the Eyeglasses

First, I modelled a set of eyeglasses. They really looked horrible as low-res, texture-only garbage.

image

The mesh was not that complicated, but it really makes a difference.

Original | Upgrade

Normal map sculpting + baking

One big issue I had with the game character was how little actual modelled detail there was. The clothing, for example, had all these wrinkles, but the 3D lighting did not react to them in any way.

The easiest solution for this was to bake a normal map for the low-detail character from a very high-polygon sculpted mesh.

image

The in-game character is very simple, but this is by design, to keep character creation fast and simple.

To get a quick base for the high-res sculpt, I simply took the in-game mesh and subdivided it a couple of times. I alternated between faceted and smooth subdivision to get topology I was happy with.

image

The high resolution mesh is quite dense

Once I had the high-resolution mesh, I changed to a textured view in order to see all the cloth details on the mesh.

image

Then, using a wrinkle brush, I painted over all of the wrinkles on the character's clothing to add actual 3D deformation. I was not too careful with this sculpt; it was not meant to look pretty, just to provide some additional surface detail for the game lighting to grab on to. The video is in an untextured mode to show the sculpting happening, but I used a textured view throughout the sculpting process to see the texture details.

The final sculpt looked quite horrible when untextured. I pretty much only used the wrinkle brush and did not even try to do any real sculpting. For the beard and hair I created some interesting criss-crossing wrinkles to mimic hair strands. I think it worked decently.

image

Now that I had the high-resolution mesh, I baked the normals from the high-polygon mesh into a normal map on the low-polygon mesh. This map can now be used in Unity to make the game character appear higher resolution.
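A real baker ray-casts from the low-poly surface to the sculpt to capture the detail. As a simplified illustration of what the resulting map encodes, here is a sketch (my own helper, under the assumption that the sculpted detail can be treated as a height offset over a flat patch) that converts a height map into a tangent-space normal map via gradients:

```python
import numpy as np

def height_to_normal(height, strength=1.0):
    """Convert a height map (H, W floats) into a tangent-space normal map.

    This gradient trick only illustrates the idea for a flat surface patch;
    real normal-map baking ray-casts from the low-poly mesh to the sculpt.
    """
    # Slopes of the height field in x and y
    dx = np.gradient(height, axis=1) * strength
    dy = np.gradient(height, axis=0) * strength
    # Surface normal = normalize(-dh/dx, -dh/dy, 1)
    n = np.dstack((-dx, -dy, np.ones_like(height)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    # Pack [-1, 1] into [0, 1] RGB, the usual normal-map storage convention
    return n * 0.5 + 0.5
```

A perfectly flat height map yields the familiar uniform (0.5, 0.5, 1.0) lavender-blue of an empty normal map; wrinkles show up as deviations from it, which is the extra detail the game lighting gets to react to.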

image

This is a really basic process that has been used in every game for decades.

image

The final normal texture

Even though the high-polygon sculpt looks absolutely awful in the modeling software, used as a normal texture in game it looks pretty good! The game lighting now has much more detail to hit, and the absence of the clipped shadows makes the character look a lot smoother and match the background style better.

I personally also like the added specularity on the metal part of the clothing and the eyeglasses.

I am not too happy about how the collar mesh looks though, so I am very likely to completely remodel it. That part of the mesh does not really hold up in closeups at all.

But I am really happy that this work got so much faster with the new version of Photoshop. Updating the old character texture took very little time.

I am now much more confident with showing the character up close to the camera!
