FREQUENTLY ASKED QUESTIONS

All you need to know about how Avodah works

Our software runs on both Windows and Linux machines, either on a local server or in the cloud. Other devices, including mobile devices, can access the software’s cloud mode through our web client.

All versions of Windows since Windows XP are supported, as are many major Linux distributions, including Ubuntu and Red Hat.

Yes, our software supports distinct roles, and users must sign in (authenticate) before they begin work. Admins have the most control, while other roles (like the translation engineer and reviewer) have access only to the portions of the program they need.

The admin requests and assigns translations, after which the others perform their roles.
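
As a rough sketch of how roles like these can be enforced in code (a minimal illustration only, not our actual API; the permission names are invented):

```python
# Minimal sketch of role-based access control; the roles mirror the ones
# described above, but the permission names are illustrative, not our API.
from enum import Enum, auto

class Permission(Enum):
    ASSIGN_TRANSLATIONS = auto()
    EDIT_FILES = auto()
    ADD_COMMENTS = auto()
    APPROVE_FILES = auto()

ROLE_PERMISSIONS = {
    "admin": set(Permission),                        # admins have the most control
    "translation_engineer": {Permission.EDIT_FILES},
    "reviewer": {Permission.ADD_COMMENTS},
}

def can(role: str, permission: Permission) -> bool:
    """Check whether an authenticated user's role grants a permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert can("admin", Permission.APPROVE_FILES)
assert not can("reviewer", Permission.EDIT_FILES)
```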

Yes, all clips (both audio and video) are saved within the app. Additionally, you can configure backups to be stored either in a local database at your location or in the cloud using our cloud mode.

If you’re running the software in standalone mode, you can also back up to OneDrive, Dropbox, or even a USB thumb drive. Choose the method that you feel most comfortable with.
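
As a purely hypothetical sketch of what selecting a backup target in standalone mode could look like (the paths and setting names here are invented for illustration, not taken from our actual settings):

```python
# Hypothetical sketch of choosing a backup target in standalone mode;
# the paths and setting names here are invented for illustration.
import shutil
from pathlib import Path

BACKUP_TARGETS = {
    "local_db": Path("/var/avodah/backups"),      # local database at your site
    "usb": Path("/media/usb/avodah-backups"),     # USB thumb drive
    "dropbox": Path.home() / "Dropbox/avodah",    # synced Dropbox folder
    "onedrive": Path.home() / "OneDrive/avodah",  # synced OneDrive folder
}

def back_up_clip(clip: Path, target: str) -> Path:
    """Copy a saved clip to the configured backup location."""
    dest_dir = BACKUP_TARGETS[target]
    dest_dir.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy2(clip, dest_dir / clip.name))
```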

Our software covers many aspects of the translation process. It starts with administrative tools for assigning and tracking project elements. Video creation, manipulation, and mapping tools then allow users to dynamically map video segments to text.

From there, reviewers add comments and translation engineers make corrections to files. Once a file is approved, the administrator marks the file to be added to the recognition library. All this happens within our software.

We’re using artificially intelligent neural networks and machine learning as part of our work. Specifically, we’re using both convolutional and recurrent neural networks. The recurrent network’s processed output can be fed back into it along with new inputs. Essentially, it educates itself as it works.

If you're interested in the technical details, we’re using long short-term memory (LSTM) with our recurrent neural network; this is what enables continued learning through multiple rounds of processing.
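
As a minimal sketch of that idea, here is a stock PyTorch LSTM whose state from one round of processing is fed back in alongside the next round’s inputs. This is an illustration, not our production model, and the layer sizes are arbitrary:

```python
# Minimal PyTorch sketch of an LSTM whose state is fed back in with new
# inputs; the layer sizes are arbitrary, not those of our production model.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)
state = None  # (hidden, cell) state; None lets PyTorch zero-initialize it

for round_ in range(5):                 # several rounds of processing
    chunk = torch.randn(1, 10, 64)      # features for 10 new video frames
    output, state = lstm(chunk, state)  # previous state + new inputs
    # `state` now summarizes everything seen so far and is carried forward
```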

Through the use of our artificially intelligent neural networks, we’re able to eliminate everything in the frame besides the human body, reducing overall computational time. And lower computational time means higher overall accuracy, because the time saved lets us run more of those recurrent calculations.
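
A rough sketch of that masking step, using an off-the-shelf segmentation network (torchvision’s DeepLabV3) as a stand-in for our proprietary models:

```python
# Sketch of person-only masking with an off-the-shelf segmentation model;
# our actual networks are proprietary, so DeepLabV3 stands in for the idea.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50
from torchvision.transforms.functional import normalize

model = deeplabv3_resnet50(weights="DEFAULT").eval()
PERSON = 15  # "person" in the Pascal VOC label set this model predicts

def keep_only_person(frame: torch.Tensor) -> torch.Tensor:
    """Zero out every pixel not labeled as a person.

    frame: float tensor of shape (3, H, W) with values in [0, 1].
    """
    x = normalize(frame, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    with torch.no_grad():
        logits = model(x.unsqueeze(0))["out"]   # (1, 21, H, W) class scores
    mask = logits.argmax(dim=1) == PERSON       # (1, H, W) boolean mask
    return frame * mask                         # background pixels become 0
```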

The proprietary aspect of what we do (patents are in progress) is what sets us apart. We’ve figured out how to do real-time action recognition across multi-frame media. While others can only slowly perform body analysis on archived video, we can do it live, in real time.
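
In rough terms, “live” means keeping pace with the camera frame by frame rather than processing a finished file after the fact. A sketch of that loop with OpenCV, where recognize is a hypothetical stand-in for the proprietary recognition step:

```python
# Sketch of a live, frame-by-frame loop with OpenCV; `recognize` is a
# hypothetical stand-in for the proprietary recognition step.
from collections import deque
import cv2

def recognize(frames):
    """Placeholder for multi-frame action recognition."""
    return None

window = deque(maxlen=16)             # sliding window of recent frames
cap = cv2.VideoCapture(0)             # a basic laptop webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    window.append(frame)
    if len(window) == window.maxlen:  # enough frames for one recognition
        label = recognize(list(window))
    # each frame is handled as it arrives, rather than after recording

cap.release()
```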

We are being recognized for our pioneering work, too. Six patents have already been issued, four have been awarded and are expected to issue, and two more are pending.

We built this software to work under the worst of conditions. It works with run-of-the-mill footage, even footage from a basic laptop webcam. We intentionally designed it to function in a low-cost, lower-quality environment so that we could keep the cost barrier low for our customers.

At the same time, our system also works with special purpose-built hardware that uses depth-sensing cameras. We are well positioned for both average users with standard webcams and high-end users with specialized equipment.

Our accuracy level is over 97% and often reaches 99.9%. Our AI is always improving, and we have the additional benefit of working with the Bible.

While it is one of the hardest books to translate, the Bible has been translated numerous times, which makes for a large useful data set. This data set is a major part of how we reach our surprisingly high accuracy rate.

We are not limited to any one language. At this point we are working with American Sign Language (ASL) and English texts, but people will eventually be able to select among all sorts of available languages.

In our full suite of tools, coming soon, multiple users will be able to select different languages and have them translated in real time, letting people separated by linguistic barriers connect with one another in a deeper way.