Simon Che de Boer

is creating AI-based, contextually aware PBR textures from any photograph.
Select a membership level
Koha Donation / Shout Out!
per month
Koha is a New Zealand Māori custom which can be translated as gift, present, offering, donation or contribution. Thank you!
Beta+ / Shout Out!
per month

Get early Beta+ access to deepPBR and a shoutout, of course.

Beta+ / Shout Out / AO / Cavity
per month

Get early Beta+ access, an honourable shoutout on the site / promotional content, and early access to AO & Cavity maps.





About Simon Che de Boer

How can I help?

So right now, we're trying to keep the lights on and keep the service running for as long as possible before having to go down the commercial route. We must run the Beta for some time, though this is expensive, as we have no other means of income right now. It is literally a passion project that we know will help the industry as a whole. As a Patreon supporter you will get exclusive access to our beta programme for free. And we mean exclusive beta, as there's also a public beta going on right now. You also get bragging rights on the site, so everyone can know how awesome you are! Thank you.

What is deepPBR?

deepPBR is an intelligent image processing tool that aids in the creation of textures for use in the VFX industry. Our philosophy is guided by the standardized PBR workflow.

With deepPBR, an individual armed with nothing more than a smartphone or digital camera can now create all the varying textures required for modern game engines and 3D packages from a single photograph. Our deep learning algorithm extrapolates albedo, roughness, normals and displacement (heightmaps). We will soon be providing AO / Cavity / 8K supersampling for Patreon supporters at certain tiers.
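
To make the output concrete, here is a minimal sketch (in Python, not deepPBR's actual tooling) of how the four generated maps might be gathered into a simple metal/roughness-style material record. The file names and folder layout are assumptions purely for illustration.

```python
# Hypothetical example: collecting deepPBR's four output maps into a simple
# PBR material record. File names are illustrative; the real output naming
# convention may differ.
from pathlib import Path

MAPS = {
    "albedo":       "brick_wall_albedo.png",     # base colour with lighting removed
    "roughness":    "brick_wall_roughness.png",  # microsurface roughness
    "normal":       "brick_wall_normal.png",     # tangent-space normal map
    "displacement": "brick_wall_height.png",     # heightmap for displacement / parallax
}

def build_material(folder: str) -> dict:
    """Collect the generated maps into a material record keyed by PBR slot."""
    folder_path = Path(folder)
    material = {}
    for slot, filename in MAPS.items():
        texture = folder_path / filename
        if not texture.exists():
            raise FileNotFoundError(f"missing {slot} map: {texture}")
        material[slot] = str(texture)
    return material

if __name__ == "__main__":
    print(build_material("output/brick_wall"))
```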

Our four GANs have been trained to cater for most natural lighting environments, camera resolutions, and chromatic, lens and noise profiles. In a nutshell, the system has a good tolerance for natural variation and can remove baked lighting and other unwanted artifacts from some very challenging use cases. In addition, we can produce contextually aware normal maps and the rest. This is simply because we’ve already shown the system hundreds of thousands of examples.
Our maps hold true to ground truth, and independent validation data has assured us of this. As a result, we know our output is highly contextually accurate to real-world measurements and real-world extrapolations. Our system has learnt what it is in fact looking at and has taught itself the fundamental behaviour of light.

What are the ‘Best Practices’?

For best results, we highly recommend that you think like a texture artist. What would be a desirable reference image if you were doing this the old-fashioned way?

  • Keep your perspective parallel to the subject matter. Nice level shots with a well-defined plane go a long way. (Our automatic perspective correction will do the rest soon.)
  • Overcast lighting environments are preferred. Much like on a photogrammetry shoot, light that is not too harsh always helps, though it is not strictly necessary. Our delightful delighting tool is getting better every day.
  • If you’re operating with a high-resolution camera, keep scale in mind. 4K (soon to be 8K) textures are pretty large, so in most cases you’re going to want to be a good distance back from the subject.
  • We recommend operating your camera in ‘Auto’ mode or equivalent. We built our system to deal with ISO noise, shutter blur and the rest.
  • RAW photos are preferred, though we have had no issues throwing in some very random images straight from Google Photos, with stellar results. Garbage in, garbage out though, as they say.

Current limitations in Early BETA?

Our models are still running in Beta, especially our normals. We had far better results, but we cooked the model: basically, if you leave training running for too long and go through too many cycles (epochs), you're done! And, well, we forgot to back up. So we are retraining right now as we speak, on what we are appropriately calling version 2.0 of our models. Once this is live, you will have the option to reprocess.

  • Resolution has to be greater than 1024x1024
  • Current output is fixed to 4096x4096
  • Maximum of 50 uploads per job (a simple pre-upload check along these lines is sketched below)
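
Since these limits are easy to trip over when batching photos, below is a hedged sketch of a pre-upload check based only on the constraints listed above. The function name, the use of Pillow, and the strict "greater than 1024x1024" reading of the resolution limit are assumptions; deepPBR's own upload tooling may enforce these rules differently.

```python
# Hypothetical pre-flight check for a batch of photos before uploading,
# based on the beta limits listed above: inputs larger than 1024x1024 and
# at most 50 uploads per job.
from PIL import Image

MIN_SIDE = 1024     # each side must exceed this (assumed strict reading)
MAX_UPLOADS = 50    # maximum uploads per job

def validate_batch(paths):
    """Return the subset of images that satisfy the beta constraints."""
    if len(paths) > MAX_UPLOADS:
        raise ValueError(f"job has {len(paths)} images; the beta allows {MAX_UPLOADS}")
    accepted = []
    for path in paths:
        with Image.open(path) as img:
            width, height = img.size
        if width > MIN_SIDE and height > MIN_SIDE:
            accepted.append(path)
        else:
            print(f"skipping {path}: {width}x{height} is below the minimum")
    return accepted
```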

What can I expect in the near future?

  • Automatic Perspective Correction: We still expect our members to follow ‘best practices’, though this will correct that last 10%. Think perfectly aligning the bricks on a wall.
  • Advanced Seamless Textures: Our current method is not dissimilar to Photoshop’s ‘content-aware fill’, though we feel there is much room for improvement. We will eventually be introducing a GAN with advanced in-painting methods (see the sketch after this list for the kind of seam problem it solves).
  • Additional Texture Maps: We will be introducing over time such maps as Metal / Cavity / AO and others. We will prioritise these based on member requests, so give us your thoughts.
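
For anyone curious what the ‘content-aware fill’ comparison means in practice, here is a small illustrative sketch of the classic ‘offset by half’ step that this style of tiling builds on: it wraps the image so the outer edges meet in the middle, making any seams visible for in-painting. This is not deepPBR's implementation, just a way to visualise the problem being solved.

```python
# Illustrative only: shift a texture by half its size so the tiling seams
# land in the middle of the image, where they can be inspected or in-painted.
from PIL import Image, ImageChops

def offset_half(path, out_path):
    """Wrap the image by half its width and height to expose tiling seams."""
    img = Image.open(path)
    width, height = img.size
    shifted = ImageChops.offset(img, width // 2, height // 2)
    shifted.save(out_path)  # a cross-shaped seam marks where in-painting is needed

if __name__ == "__main__":
    offset_half("brick_wall_albedo.png", "brick_wall_albedo_offset.png")
```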

Is there a master plan?

  • deepPBR Marketplace: Members of deepPBR will have the option to opt their submissions in to a larger user database. Other members and the general public will have full purchasing / rating access to this database. Royalties will be provided. Basically, the TurboSquid of textures. This information will also aid our process of human-assisted deep learning.
  • API Integration: We plan to open up an advanced API to 3rd parties. This will be determined on a case-by-case basis. As huge fans of photogrammetry ourselves, we are very aware of how useful this tool would be within the photogrammetry community.
  • Blackbox Solution: An ‘air-gapped’ solution will be offered to the major studios and production houses under a fixed licensing agreement. These will be custom-built NVIDIA-based units. Periodic updates will be provided in physical form. The reason for this is simple: security.
  • Temporal deepPBR: Hold my beer. Okay! We can use this on video too. What this means is that we could place someone on a green screen, or really any environment, extrapolate detailed normals and delight them, and then relight that person in any scenario. This will be an absolute game changer for Virtual Studio Production and/or traditional production houses.

Can I see some more examples?

$155 of $2,500 per month
Monthly Site Expenses

Simply put, we wish to run the deepPBR beta as a free service for as long as we can. We wish to grow this community and the industry as a whole, and allow artists to have access to these incredibly valuable tools. Currently our overheads are basic, as this has very much been a passion project that we've wanted to see hit the mainstream for a very long time. However, we do need to keep the lights on. We have been fortunate enough to have most of our hardware and cloud compute sponsored, so as long as we can cover the basic day-to-day, we can continue to improve. Thank you.
