How to make a 3D view from a photo. Volumetric models from photographs. 3D simulation based on a real object

At first glance, it is impossible to create a three-dimensional image using a conventional digital camera, because a three-dimensional image must contain much more information than a two-dimensional image carries. However, with the help of special applications, you can "invent" the missing information about the third dimension and make a three-dimensional model out of any photo. We will talk about applications for quickly creating 3D images based on photos in this review.

Since there is no universal algorithm for turning a photo into a 3D model, the most reliable way to create 3D objects is to model them manually. But this process is complicated and requires skill with 3D editors. Even experienced 3D designers try to avoid modeling from scratch whenever possible and are constantly looking for alternative ways of fast modeling. These searches often lead to useful utilities: constructors and generators of three-dimensional models. Such tools make it possible to quickly create complex objects without wasting time on tedious modeling.

FaceGen Modeller - 3D Head Builder

Although FaceGen Modeller looks more like a computer game, it is actively used by game developers for creating low-poly character models. The principle of operation is simple: using various appearance settings, the user assembles a three-dimensional identikit, trying to achieve the maximum possible similarity with the original.

While working with this three-dimensional face constructor, you can not only set the character's gender but also control the degree of "femininity" or, conversely, "masculinity" in its appearance. In addition, you can specify the model's age, control the size of the cheekbones, make the chin more prominent, and use the asymmetry settings if the 3D character has disproportionate facial features.

When adjusting any parameters of the head, in the FaceGen Modeller preview window, you can observe in real time the changes that occur with the model.

Sometimes, in the process of creating a 3D head, you may feel that something about it is wrong, but it is difficult to pin down what exactly. For such cases the program has a built-in generator of similar appearances. It randomly creates a set of images of similar "people" in the hope that one of the proposed variations will be closer to the desired result. If one of the alternative combinations of head parameters fits, just click on that image and the application will change the model settings automatically.

The program can also generate a random model for one of several ethnic types (European, African, Asian, and so on), or a face that combines features characteristic of several of them.

If you have never assembled an identikit before and cannot find the right model parameters, do not worry too much: the modeling stage affects the final result less than you might expect, since anatomical inaccuracies are not as conspicuous as an incorrectly applied facial texture.

And yet, no matter how hard you try, the program's settings alone will hardly produce a level of realism at which the model could be confused with a portrait of the person. The work on a three-dimensional project is truly finished only after the model acquires a recognizable texture. By default the program uses a library of ready-made skin textures for different types of people. These textures are not very realistic and are intended mainly to let the 3D artist roughly imagine how the work will look after a real photo texture is applied.

In other words, when creating a three-dimensional model, it is necessary to “glue” an image of a face to the surface of a three-dimensional head.

A person's face can be taken from any photograph (a portrait, of course). The program allows both a quick fit, which uses a single frontal photo, and an exact fit, which adds two more profile shots to the frontal one. For the facial texture from the photo to sit exactly on the surface of the 3D model, after loading the image in the Photofit section you need to adjust the model's control points. A test image of a face with the nodes marked is shown on screen; your task is to indicate the same points (eyes, chin, and so on) on the uploaded photo. After this simple texture calibration, FaceGen Modeller starts its calculations, projecting the image and determining the final look of the textured model. Applying the texture correctly takes some time: depending on the complexity of the model and the hardware of your computer, it can take five minutes or more. Only after that does the 3D person become recognizable: birthmarks appear on the face, the skin texture becomes natural, familiar wrinkles show up, and so on.

To check how realistic the head model turned out, you can try to "bring it to life" using the settings of the Morph section. A group of sliders makes the 3D person take on various facial expressions: surprise, anger, a smile, a wink, and so on.

The created head can be saved as a BMP, JPG, TIF, or TGA bitmap (for example, to use it as a 3D avatar). In addition, the completed work can be exported to one of the 3D formats (OBJ, 3DS, LWO, XSI, WRL, etc.) to be used with other 3D editors.

iClone 4 - create a 3D model from a photo in a couple of minutes

One of the easiest ways to create a three-dimensional model of a person's face from a photo is offered by iClone. You only need one shot, but it must be a high-quality close-up portrait. First you select a character in the program's library onto whose body the resulting head will be placed. After that the Head tab becomes active. Clicking it lets you upload a photo from your hard drive, after which a wizard starts that helps you convert the portrait into a 3D model. The wizard works in several stages.

To begin with, it offers to touch up the picture: correct the color rendition and orientation and crop everything superfluous. In the next step iClone shows a pre-generated 3D head. By defining the boundaries of the face with special markers, you can bring the 3D model as close as possible to its 2D prototype. The next step is to adjust the position of the face in the photo, after which you correct the markers the program has placed for the eyes, nose, mouth and eyebrows.

Once saved, the resulting 3D head can be edited with iClone's many tools. You can give the character a suitable hairstyle and make it move and talk. The finished project can be saved in various formats, including Flash animation.

FaceShop 5 - get a 3D head based on a real photo

FaceShop is another program that can create a 3D head from just one photo. As with iClone, the photo must be sharp enough, otherwise some parts of the 3D model may not be reproduced well. The creation of a 3D face is based on key points. After loading the image, the user is prompted to crop it, separating the face from other objects in the photo. Then key points must be indicated: the corners of the eyes, the lower border of the chin, the middle of the forehead, the corners of the mouth, and so on. The "smart" wizard places them itself once the user has specified the first three, but as a rule some of them need to be moved to a more accurate position.

In the next step, FaceShop will generate a model of the desired shape, and also apply a texture to it - the original 2D image. In the program window, it will be possible to rotate the model, examining it from all sides. If errors occur, you can return to the previous stage and correct the control points. Depending on the features of the original photo, the face may turn out better on the right or left side. In this case, you can use the Mirror tool by cloning one side of the face to the other.

If part of the model's surface lacks a suitable texture, the Brush tool, which works like the Stamp in Photoshop, will help: indicate the source, then paint over the area of the model whose texture needs improving, and the defects will be corrected. The finished project can be saved in the OBJ format, which is supported by all major programs for three-dimensional graphics and animation.

Strata Foto 3D CX 2 - turn a stack of photos into a 3D model

Unlike FaceShop and the other solutions discussed above, Strata Foto 3D CX 2 can create not just 3D heads but almost any 3D object. For the application to work, you need to photograph the object whose model you want to obtain from different sides; the more photos you take, the better the result. It is recommended to shoot objects on a special calibration sheet, which can be printed directly from the program. This lets Strata Foto 3D CX determine the camera's position in 3D space for each shot. The program also has built-in support for popular camera models and attempts to correct their optical imperfections, improving the model's accuracy.

Model creation is fully automatic. First, the approximate geometry is generated, then the details are added and the texture is applied. The user can watch the process in the preview window and stop it at any time. The result can be corrected directly in Strata Foto 3D CX or finalized in Adobe Photoshop (using a special add-on). The project can be saved in VRML and 3DS formats.

Free 3D Photo Maker - make a stereo image

Recently, devices capable of displaying images and video in stereo format have been gaining popularity. To view such content you need, in addition to an appropriate TV or monitor, anaglyph (stereo) glasses. Many professional video editors and 3D programs offer special means to save projects in stereo 3D format. However, to get a three-dimensional image it is not at all necessary to master a complex application: you can take a 3D photo at home with a regular digital camera or smartphone.

The free Free 3D Photo Maker utility creates a stereo image from two photos of the same object. To get the desired effect, the pictures must be taken with a slight horizontal shift of the camera (by roughly 5-7 centimeters). Load both photos into the program, select one of the five anaglyph algorithms, and after a few minutes Free 3D Photo Maker will display the finished stereo image. Keep in mind that by default the stereo image is generated for viewing with red-cyan glasses; if you have yellow-blue glasses, you need to change the algorithm the program uses by default.
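The red-cyan compositing that such utilities perform can be sketched in a few lines: the left view supplies the red channel, the right view supplies green and blue. This is a minimal pure-Python illustration of the principle, not Free 3D Photo Maker's actual algorithm; images are represented here as nested lists of (r, g, b) tuples.

```python
def red_cyan_anaglyph(left, right):
    """Combine two views into a red-cyan anaglyph.

    left, right: images as lists of rows of (r, g, b) tuples,
    taken a few centimeters apart horizontally. The left view
    feeds the red channel; the right view feeds green and blue.
    """
    out = []
    for row_l, row_r in zip(left, right):
        out.append([(l[0], r[1], r[2]) for l, r in zip(row_l, row_r)])
    return out

# Tiny 1x2 example: pure red survives from the left view only.
left = [[(255, 0, 0), (10, 20, 30)]]
right = [[(0, 255, 255), (40, 50, 60)]]
print(red_cyan_anaglyph(left, right))  # -> [[(255, 255, 255), (10, 50, 60)]]
```

Through red-cyan glasses, each eye then sees mostly its own view, which is what produces the depth effect.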

Project Photofly - create 3D from photos "in the cloud"

Creating 3D models from photographs usually requires quite serious computing power. Not long ago Autodesk offered an interesting solution to this problem: the Project Photofly service, launched in the experimental Autodesk Labs, transfers all the work of building a three-dimensional model from photos to a server. The service works as follows: the user installs the free Photo Scene Editor application on a computer and uploads photos through it to the Autodesk server, where they are processed and returned to the computer as a finished model.

Working with Photo Scene Editor, the user can correct the result and save it in the DWG format, which is readable by applications such as Autodesk AutoCAD, Revit, Inventor, and others. Project Photofly does not require a professional camera and tripod; an amateur camera will do. To get the most accurate model, it is desirable to take as many photos as possible (for example, to get a 3D model of a building you will have to walk around it with a camera, snapping at least forty pictures).

Written for CHIP magazine

Sergey and Marina Bondarenko

The purpose of this article is to illustrate the use of tools known in the field of design automation for restoring object models from photographs in bench modeling.

What is the restoration of drawings or 3D models of an object from photographs?

It is known that from a photograph one can calculate certain geometric characteristics of the reality captured in it. More specifically, if we have a picture taken with a lens of a known focal length, and the point where the lens axis intersects the picture plane (the center of the picture) is known, then we can very accurately calculate the angular distances between the center of the picture and any point in the picture or on the object (product) shown in it. And if there are several photographs in which a certain product (an aircraft, tank, ship, building, or parts thereof) was shot from several different points, then certain algorithms can be used to calculate the relative positions in three-dimensional space of various points of the product. Then, applying simple geometric transformations of rotation and scaling to the calculated coordinates and connecting the calculated points with the corresponding lines and planes, you can eventually obtain a 3D (three-dimensional) model of the product, and by projecting it onto the desired planes, obtain its projections, that is, drawings of the product.
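The angular measurement described above can be sketched numerically. Assuming an ideal pinhole camera with focal length expressed in pixels, a point offset by (x, y) from the picture center subtends an angle of atan(sqrt(x² + y²)/f) with the lens axis. The helper below is purely illustrative of this principle, not code from any photogrammetry package.

```python
import math

def angle_from_center(x, y, focal_px):
    """Angle (degrees) between the lens axis and the ray through
    image point (x, y), measured from the picture center, for an
    idealized pinhole camera with focal length given in pixels."""
    return math.degrees(math.atan(math.hypot(x, y) / focal_px))

# A point one focal length away from the center lies roughly
# 45 degrees off the lens axis.
print(angle_from_center(1000, 0, 1000))
```

Real cameras add lens distortion on top of this ideal model, which is why photogrammetry programs calibrate each camera rather than relying on the pinhole formula alone.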

The science and technology of restoring 3D models and product drawings from photographs is called photogrammetry. Numerous programs are available to automate this work, such as REALVIZ/Autodesk ImageModeler, PhotoModeler, and others.

Why restore drawings or a 3D model of a product from photographs?

There are times when only photographs exist. For example, a certain architectural monument was once photographed from different points, and then for some reason the monument itself was lost and no drawings or sketches of it remained. In this case photographs are the only source of knowledge about the object, and drawings or a 3D model can be obtained only from them.

Another case, in architecture, is the need to obtain drawings or a 3D model of an existing building when no drawings or other materials exist that would make photogrammetry unnecessary, and the shape and complexity of the building make actually measuring all its parts extremely laborious, if not impossible. Here, obtaining drawings or a 3D model from photographs can be the simplest solution. The difference from the previous case is that the photographs can be taken specifically for photogrammetry, and can therefore be more suitable and of better quality.

There are also many cases when the available drawings of a product (an aircraft, tank or ship) were built approximately, "by eye" from photographs and sketches, and do not rest on reliable dimensional or other data "from the manufacturer" that would allow a more or less sound judgment about the dimensions, proportions and contours of the object. The "drawings" of the same product published in popular publications often differ so much from each other, and from the product itself, that they cannot be used to build an accurate bench model, or one has to guess which of the drawings found is more reliable. In these cases the available photographs of the product can serve to check the accuracy of particular drawings, and if there are many such photographs and they are of good quality, they can also be used to build a 3D model and drawings of the product.

An example of restoring a 3D model and product drawings from photographs using REALVIZ ImageModeler

I will give an example of restoring a 3D model and drawings from photographs using a simple product: the windscreen (visor) of the Yak-9T cockpit canopy. My reason for turning to photogrammetry here is quite typical: I have several drawings of this aircraft, the projections of the visor on them differ significantly, and none can reasonably be chosen as the most "similar". On these drawings the visor is simply drawn more or less alike; a bench model claiming acceptable accuracy cannot be built from them.

On the other hand, there is good photographic material that can be used for photogrammetry. This is, first of all, a few close-up shots of the visor from the well-known 1943 film "Operation of the Yak-1, -7, -9 Aircraft. Instructions to the Pilot", as well as several more or less clear photographs from other sources, taken from angles not present in the film frames.

We select the appropriate pictures and bring them to approximately the same size. Since our product is strictly symmetrical, we "mirror" some of the pictures and add the mirrored copies to the set; thus the set contains pictures taken from two symmetric points, even though in reality we do not have them.
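The mirroring trick is a one-line transform: each pixel row is reversed left to right. A minimal sketch on a plain pixel grid (any image editor's horizontal flip does the same thing):

```python
def mirror_horizontal(image):
    """Return a left-right mirrored copy of an image given as a
    list of pixel rows. For a strictly symmetric subject, the
    mirrored shot behaves like one taken from the camera position
    on the other side of the plane of symmetry."""
    return [list(reversed(row)) for row in image]

image = [["a", "b", "c"],
         ["d", "e", "f"]]
print(mirror_horizontal(image))  # -> [['c', 'b', 'a'], ['f', 'e', 'd']]
```

Note that this is only valid because the visor is symmetric; for an asymmetric subject a mirrored photo would describe a product that does not exist.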

We use an old but fully functional version of REALVIZ ImageModeler. Its advantage is that it is a standalone program (the latest versions of ImageModeler are already part of AutoCAD and require it to be installed).

We load all the selected images into ImageModeler. Each picture is associated with a separate camera with its own, unknown focal length and frame center; we choose this loading method because we do not know how the chosen pictures were actually taken or how they were cropped. In other words, we simply tell ImageModeler that we know nothing about how the photos were taken, thereby giving it the right to determine all of this itself (and it knows how).

Next, we place named marks, the so-called calibration markers, on all the uploaded images. Each named marker corresponds to a particular point of the product: most often some corner that is clearly visible in the pictures, or the intersection of straight lines (we drew such intersections on the pictures in advance). On each picture we try to place all the markers whose locations are visible or can be reliably guessed. As the markers are placed, ImageModeler performs the necessary recalculations, tries to calibrate the cameras and reports whether the recalculation succeeded ("Cameras have been successfully calibrated.") or not. In case of failure (meaning ImageModeler cannot work out from the current marker positions where and how the pictures were taken), we refine the marker positions until the calibration succeeds.

We keep refining the positions of all the markers until the lists of images and markers in the left part of the ImageModeler window "turn green". Green icons mean the markers are placed "well": ImageModeler has determined that the spread of their calculated positions across all images does not exceed 3 pixels (for images of roughly 1200 x 800 pixels). If you wish, you can tighten this restriction to a maximum deviation of 2 or even 1 pixel and continue refining the markers colored yellow or red, trying to "green" as many of them as possible. This work is rather tedious and requires some experience to choose the right marker to deal with first. It ends when either all the markers are green or nothing more can be improved.
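The color coding described above amounts to thresholding the spread of a marker's computed positions. A hedged sketch of that bookkeeping follows; the 3-pixel limit comes from the text, but the classification scheme itself is my reconstruction of the idea, not ImageModeler's internals (which also distinguish a yellow, borderline state).

```python
def marker_spread(positions):
    """Largest pairwise distance, in pixels, among the computed
    2D positions of one marker across all images."""
    return max(
        ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
        for i, (xa, ya) in enumerate(positions)
        for xb, yb in positions[i + 1:]
    )

def marker_color(positions, limit=3.0):
    """'green' if the spread stays within the limit, 'red'
    otherwise; a two-way split is enough to convey the idea."""
    return "green" if marker_spread(positions) <= limit else "red"

# Three computed positions of one marker, all within 3 px of
# each other, so the marker counts as well placed.
print(marker_color([(100.0, 50.0), (101.5, 50.5), (99.0, 49.0)]))  # -> green
```

Tightening the limit to 2 or 1 pixel, as the text suggests, is just lowering the `limit` argument.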

As a result of this work, ImageModeler holds a set ("cloud") of points in three-dimensional space, each corresponding to one of the markers. We export this cloud to a file of a suitable format (for example, DWG) and import it into a 3D modeling program. At first glance we see a shapeless cloud of points which, after some rotation, examination and comparison with the photographs and the markers on them, we manage to "read", understanding which point corresponds to which marker. Next, we orient the cloud so that the visor takes the desired position in 3D space (the plane of symmetry coincides with the YZ plane, and the back plane of the visor with the XZ plane).

Finally, the most essential step after orientation: scaling. ImageModeler, of course, does not know the real distances between the markers and sets them only in correct relative proportions, based on some arbitrary unit. For scaling we take dimensions known from other sources: the height of the visor from the lower edges of the sidewalls to the top, and the width of the visor between the lower edges of the sidewalls:
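The scaling step boils down to a single factor: a known real dimension divided by the same dimension measured in the cloud's arbitrary units, applied to every coordinate. A minimal sketch with made-up numbers (the visor dimensions here are placeholders, not real Yak-9T data):

```python
def scale_cloud(points, measured, actual):
    """Uniformly scale a point cloud so that a distance of
    `measured` (in the cloud's arbitrary units) becomes `actual`
    (in real-world units, e.g. millimetres)."""
    k = actual / measured
    return [(x * k, y * k, z * k) for x, y, z in points]

# Hypothetical example: the visor height measures 1.7 units in
# the cloud but is known to be 340 mm, so every coordinate is
# multiplied by the same factor of 200.
cloud = [(0.0, 0.0, 0.0), (0.5, 1.7, 0.2)]
print(scale_cloud(cloud, measured=1.7, actual=340.0))
```

With a second known dimension (the width, in this case) you can cross-check the factor; if the two factors disagree noticeably, the marker placement deserves another look.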

And we get a more or less believable 3D model of the visor; its projections onto the planes give the three views of the drawing. We import the resulting 3D model of the visor into the aircraft model, in which the cowling and the upper part of the fuselage are already finished; after aligning the top of the visor with its calculated position, we make sure that the visor "fits" well in its place: the lower corners of the frame (marked with red circles) lie almost exactly on the surface of the fuselage:

What happened?

Looking at the 3D model of the canopy along with the fuselage and other parts of the canopy, we are convinced of the "similarity" - our canopy is very, very similar to the available photographs. The same conclusion follows from a comparison of the projection from the side with photographs:

It can be seen that while our visor is quite similar to the photographs of the Yak-9T, it differs significantly from the visor of the famous Yak-9 by I.I. Kleshchev, now exhibited in the Zadorozhny Museum (lower part of the last photograph). As an explanation, it can be assumed that the visor on this aircraft is non-standard and borrowed, for example, from the Yak-1B; "abnormality" is also indicated by the fact that the front armored glass in this visor is clearly installed incorrectly.

In conclusion, here are the final drawings of "my" visor, "taken" from the 3D model:

Conclusions

The restoration of the 3D model and drawings of the product, a visually very accurate one, was quite successful, even though in this case only a few old and rather poor pictures were available. The accuracy is supported by the fact that ImageModeler was able to calibrate the cameras well from the images with our markers; this is grounds to assert that it determined the positions of the markers in space, and hence the spatial model of the product, accurately. Of course, if the photographs were better and more numerous, and especially if their shooting conditions (focal lengths and other parameters) could be entered along with the pictures, the accuracy would be higher; almost absolute accuracy could be achieved by calibrating the camera beforehand with the calibration tools built into ImageModeler and then shooting the product with that camera at exactly known focal lengths for each shot (the necessary camera data can be recorded in the image headers). However, for the purposes of bench modeling the resulting 3D model and drawings can be considered more than sufficient, and their accuracy is noticeably better than that of drawings from public sources.

I accept orders for the production of 3D models from photographs, sketches, drawings, descriptions, and so on. The model will be made in 3ds Max and ZBrush. You will receive files in OBJ, STL and 3DS formats, well suited for milling and 3D printing. A model from a photo can come in different versions: a simple portrait, a portrait with hands, a full figure. Various poses are possible, as is the addition of decorative elements such as a frame, patterns or a base. The price depends on the complexity of the intended product.


A sample 3D model of a bas-relief is available for free download

Model in the format: 3ds, obj, stl

Number of polygons: 805 236

If you want to order such a portrait, send your request

You can also write an email to [email protected]
Or use this form, enter your e-mail and I will answer as soon as possible

Examples of work performed:

Comments (12)

Rinat, 22 07 2019: Good afternoon, where can I find out the prices for your work? How much will it cost for one photo or face?
Denis, 25 10 2018: Hello, can a 3D model of a head be made for later use in Daz3d?
Elena, 28 06 2018: Hello! Is one frontal photo enough to create a 3D head? How much would such a job cost?
Tigran, 12 02 2018: Good afternoon, is it possible to make a 3D model of a full three-dimensional sculpture or bust from a photograph of a character?
Zhibek, 08 08 2017: Hello! How much does it cost to create such a 3D portrait of Angelina Jolie? Is one photo enough for you?
Anton, 16 03 2017: Alexander, good afternoon. I do not create models for laser engravers. As far as I understand, that is a 2D drawing, while I make 3D models. If necessary, you can look at the free 3D models on the site.
Alexander, 19 02 2017: Good afternoon. We are engaged in laser engraving. Have you had any experience making models for laser engraving? How much does it cost to make a model? You can send us a sample file so that we can try engraving it in brass.
Arkady, 30 04 2016: Hello, I am interested in the cost of a bas-relief from a photo.
Sergey, 01 04 2016: I am interested in a stylized bust. If you can, please contact me.
Anton, 30 01 2016: Alex, the creation of such a 3D portrait costs 3000 rubles; the average turnaround is 3 days.
Alex, 16 12 2015: Hello, I would like to know the cost of such a portrait and how long it would take. Thanks.
This method can be called simple only with some stretch. What is its idea? We take a number of photographs of the same object, in such a way that the images from two different shooting points overlap slightly. From these data a three-dimensional model of the photographed object can be built. Done manually, the result can come close to ideal, but it takes a lot of time. Strictly speaking, many other factors must also be taken into account: the tilt and position of the camera, image distortion due to imperfect optics, and so on.

Having estimated the amount of manual work involved, many people immediately lose all interest in such undertakings. But it is not all that bad: the software world has long had programs that significantly ease this process, or at least automate a number of steps. They have a couple of notable drawbacks, though. First, for amateur experiments they cost a lot. Second, these programs sometimes require significant computing power. What to do? The solution is very simple: use specialized sites or, if you like, cloud services.

This approach spares us a lot of difficulties at once. It remains only to take suitable photos or shoot a video, upload it all to the server, and get an acceptable result at the output. The disadvantages are obvious: we cannot control the process of creating the 3D model, and we have to wait while the source data is processed. To test the methodology, however, one can live with this, especially since the services under consideration are completely free.

So, the first service we will discuss is a development of the well-known company Autodesk called 123D Catch. This program is essentially a client for a cloud service: through it we send photos to a remote server, where all the processing takes place. To download the program, you need to register or log in with a Facebook account. The project is still in beta and not stable, so you will have to put up with some glitches and slowdowns. After installing the program on a PC, you can start photographing the object you need. To get started, it is highly recommended to watch the tutorial videos; the first of them contains basic tips for shooting objects correctly.

In fact, there are not that many tips (or requirements). Transparent or shiny surfaces and glare should be avoided. Repeating textures in the frame are not recommended either. The object must remain motionless; you move around it with the camera. Again, frames should overlap a little. You also need to watch the lighting: it should be sufficient (no noise in the frame) and more or less uniform, and flash is not recommended. Whether you use a DSLR, a simple point-and-shoot, or even a smartphone camera is not so important, as long as the frames are not blurry and the subject stays in focus. One more nuance: there is no point in shooting at a full resolution of a dozen or two megapixels; 3-4 megapixels will be enough. It is more important that the subject occupies as much of the frame as possible. Also, never use telephoto or fisheye lenses!

In any case, everything comes with experience. You will have to shoot more than a dozen photos before you get a decent model. Practice first on simple geometric shapes against a uniform background. By the way, an attempt to recreate complex surfaces like fur or hair will almost certainly fail. In the meantime, let's briefly look at working with the program. Everything is very simple here: after starting it, click Create a new Photo Scene and select the photos that will be used to build the object. To begin with, it is worth taking a set of 15-20 frames shot from different angles but at the same angle to the surface on which the object stands. Other photos can be added to the scene later.


    Click the Compute Photo Scene button, enter your name and e-mail, and agree to the service's terms of use. Be sure to enter a working e-mail address, as notifications and links to finished scenes will be sent to it. However, if you have the desire and the time, you can choose Wait instead of Email Me in the next dialog. The program will then minimize, upload all the photos to the server in the background, wait for the result and become active again. On average, uploading the files takes a few minutes, and the finished result is returned within 10-15 minutes. If you later add more photos to the scene or change it, all the files are re-uploaded to the server and recalculated, which is somewhat annoying.


    After some time, the finished scene will be loaded into 123D Catch (in the settings you can set its quality, that is, the level of detail). The scene is saved as a small 3dp file that can be opened on another PC, since the photos and everything else are downloaded from Autodesk's servers anyway. The finished scene can be exported to several popular formats (DWG, OBJ and others). Ideally, the service stitches the photos into a 3D model automatically, but it also makes mistakes. Unprocessed frames are marked with an exclamation-point icon; they can be removed, or adjusted manually by right-clicking and selecting Manual Stitch.
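The OBJ format the scene exports to is plain text, so it is easy to sanity-check a result without opening a 3D editor. Here is a minimal, hypothetical Python sketch (not part of 123D Catch) that counts the vertices and faces in an exported file:

```python
def obj_stats(lines):
    """Count vertex (v) and face (f) records in a Wavefront OBJ file,
    given as an iterable of lines."""
    verts = faces = 0
    for line in lines:
        tag = line.split(None, 1)[0] if line.strip() else ""
        if tag == "v":
            verts += 1
        elif tag == "f":
            faces += 1
    return verts, faces

# Minimal OBJ: a single triangle
sample = """v 0 0 0
v 1 0 0
v 0 1 0
f 1 2 3
""".splitlines()
print(obj_stats(sample))  # (3, 1)
```

A model that comes back with only a handful of faces is a sign the stitching failed and some frames need the Manual Stitch treatment.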


    In the window that opens, mark the same points (preferably all four) on the three selected photos. After you select a point on one or two photos, the program often suggests the matching point on the third itself; you can click on it and position it precisely at high magnification. After the correction, the model is sent to the cloud again for recalculation. Besides exporting an object, you can make a video by setting camera positions and the delay at each of them, for example a fly-around of the model or a panorama (if you were shooting an area from its center). The video is encoded on the client machine, not in the cloud.


    Now let's move on to two other free services, Hypr3D and My3DScanner, which produce somewhat less detailed 3D models than 123D Catch. Both require registration and allow no intervention in the rendering process: after uploading photos or video, all you can do is wait for the finished result or, if you are unlucky, an error message. It is therefore all the more important to take good source photos right away. By and large, the shooting recommendations are the same as for 123D Catch, but there are a few important nuances. Review the list of the most common mistakes, with examples of what not to do.

    An example of shooting an object for Hypr3D

    Once again: avoid transparent or glossy objects, moving objects, untextured backgrounds (or ones with a repeating structure), lenses with strong distortion, very uniform or moving backgrounds, out-of-focus or blurred frames, uneven or insufficient lighting, and heavy noise in the photos. Shooting is again done by moving the camera around the object; you should end up with roughly 30 to 60 frames taken at the same angle to the object. In short, here too you will have to suffer a little before good 3D models start coming out.
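Blurred frames are one of the mistakes listed above, and they can be screened out automatically before uploading. A common trick is to compute the variance of the Laplacian over a grayscale image: the lower the value, the blurrier the frame. The pure-Python sketch below is a simplified illustration (a real pipeline would use a library such as OpenCV, and the rejection threshold is something you tune per camera):

```python
def laplacian_variance(img):
    """Variance of the 4-neighbour Laplacian over a grayscale image
    given as a list of lists of ints. Low values suggest a blurry frame."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1]
                   + img[y][x+1] - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

# A flat image has zero variance; one with a sharp edge scores high
flat = [[128] * 5 for _ in range(5)]
edge = [[0, 0, 255, 255, 255] for _ in range(5)]
print(laplacian_variance(flat), laplacian_variance(edge))
```

Frames scoring well below the rest of the batch are good candidates for reshooting.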


    Now for the specifics of working with these services, starting with Hypr3D. Click the Upload button and proceed to upload the source files: select the necessary images or videos, enter a name and tags for the project and click Submit to start processing. The service has one nuance: it works without problems with video shot on an iPhone / iPod Touch (usually a half-minute fly-around of the object is enough), but for all other cameras it asks for a thm file. In practice, however, we managed to successfully feed it MOV and 3GP files captured by smartphones.


    Typically, building a model from photographs takes no more than half an hour. Working with video files takes much longer, since they are first split into separate frames, which by itself takes at least 20-30 minutes. The finished model can be viewed with textures, as a polygon mesh, or as a point cloud; by default it is made public to the service's visitors. Scenes are exported in DAE, PLY or STL formats; the latter can be sent straight to a 3D printer. Hypr3D also offers paid options for editing a model and preparing it for 3D printing, as well as for actually manufacturing it in metal, plastic, ceramics and other materials.
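A binary STL file, like the ones destined for a 3D printer, has a simple fixed layout: an 80-byte header, a little-endian 32-bit triangle count, then 50 bytes per triangle. A quick, hypothetical Python check of a downloaded file might look like this:

```python
import struct

def stl_triangle_count(data):
    """Read the triangle count from a binary STL blob: an 80-byte
    header, a little-endian uint32, then 50 bytes per triangle."""
    if len(data) < 84:
        raise ValueError("not a binary STL")
    (count,) = struct.unpack_from("<I", data, 80)
    # Sanity check: file size must match the declared count
    if len(data) != 84 + 50 * count:
        raise ValueError("size mismatch (ASCII STL or truncated file?)")
    return count

# Fabricate a minimal blob declaring 2 zeroed-out triangles
blob = b"\0" * 80 + struct.pack("<I", 2) + b"\0" * 100
print(stl_triangle_count(blob))  # 2
```

The size check also weeds out ASCII-format STL files, which are plain text and follow a completely different layout.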

    With My3DScanner, models can be viewed in any WebGL-enabled browser as an untextured object or as a point cloud, and export is only available in OBJ and PLY formats. The service has a funny glitch: it often builds quite a good model of the captured object, but covers it with a kind of "dome" stretching up from the edges of the stand the object sits on. To save resources, the creators of the service store finished models for only a few days and then delete them, so take care to save your "digitizations" somewhere safer.
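An exported PLY file begins with a short plain-text header that declares, among other things, how many points the cloud contains. A hypothetical Python sketch for pulling that number out of a downloaded file:

```python
def ply_vertex_count(text):
    """Parse the header of an ASCII PLY file and return the number
    of vertices declared in the 'element vertex N' line."""
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("element vertex"):
            return int(line.split()[-1])
        if line == "end_header":
            break
    raise ValueError("no vertex element found")

header = """ply
format ascii 1.0
element vertex 1234
property float x
property float y
property float z
end_header
"""
print(ply_vertex_count(header))  # 1234
```

A suspiciously low count is a quick hint that the scan failed before you bother opening the model in a viewer.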

    With all three services, the quality of the finished models, while satisfactory, still calls for manual refinement; mostly you will have to cut away extra parts or the base the object stood on. Still, automation takes on the bulk of the digitization work. After experimenting with the cloud services, you can try standalone applications such as 3DSom (Strata Foto 3D CX), iModeller, PhotoModeller and others. You will have to tinker with them longer, but the results are much more impressive. And then you can dive completely into the world of real 3D modeling. But that's a completely different story. Good luck!
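Cutting away the substrate can often be done programmatically on a point cloud before it ever reaches a 3D editor. The hypothetical sketch below drops points at or below an assumed ground plane and outside an axis-aligned bounding box (the coordinate conventions are assumptions; a real cloud needs eyeballing in a viewer first to pick the plane and box):

```python
def trim_cloud(points, z_floor=0.0, bbox=None):
    """Drop points at or below a ground plane and, optionally, outside
    an axis-aligned bounding box (xmin, xmax, ymin, ymax).
    points: list of (x, y, z) tuples."""
    kept = []
    for x, y, z in points:
        if z <= z_floor:
            continue  # part of the stand / substrate
        if bbox is not None:
            xmin, xmax, ymin, ymax = bbox
            if not (xmin <= x <= xmax and ymin <= y <= ymax):
                continue  # stray background point
        kept.append((x, y, z))
    return kept

cloud = [(0, 0, -0.1), (0.5, 0.5, 0.3), (5.0, 0.0, 0.2), (0.2, 0.1, 1.0)]
print(trim_cloud(cloud, z_floor=0.0, bbox=(-1, 1, -1, 1)))
```

For meshes rather than raw clouds, the same cropping is usually done interactively in a 3D editor instead.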