Automated tools are quickly making inroads into casual computing environments, solving progressively more complex tasks. These advances, however, still trade reliability for convenience: frequent minor failures are acceptable in casual environments, but critical systems cannot make the same exchange. The software systems that NASA develops could benefit greatly from the machine learning technologies already applied to casual computing, if the software produced by learning algorithms could be verified. We propose to apply formal methods to machine learning, and specifically to Programming by Demonstration (PBD). The current technology readiness level is very low: no known verifiable PBD systems have been deployed, and published research in the area is limited. The Phase I effort will result in publications and a prototype implementation of a verifiable PBD system using SMT solvers. Phase II will build on the Phase I publications and prototypes to demonstrate increased verification capabilities through the application of more complex formal methods, such as model checking. Verifiable machine learning would allow trainable systems that satisfy critical properties while remaining adaptable to specific use cases. Such tools could be repurposed for multiple applications without incurring the development costs of today's manually engineered automation.
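To illustrate the core idea, the following toy sketch shows the verifiable-PBD loop in miniature: a program is "learned" from a few demonstrations, then checked against a safety property before deployment. All names (learn_from_demos, verify, the demonstration data) are hypothetical, and the bounded exhaustive check stands in for the SMT query a real system would discharge with a solver such as Z3; it is a sketch of the concept, not the proposed implementation.

```python
# Hypothetical sketch: learn a program from demonstrations, then verify it
# against a property. The bounded search below stands in for an SMT query.

def learn_from_demos(demos):
    """Fit the simplest affine program f(x) = a*x + b to two demonstrations."""
    (x0, y0), (x1, y1) = demos[0], demos[1]
    a = (y1 - y0) // (x1 - x0)
    b = y0 - a * x0
    return lambda x: a * x + b

def verify(program, prop, domain):
    """Return a counterexample violating prop, or None if verified on domain.

    A production system would encode prop as an SMT formula and check the
    negation for unsatisfiability over the full input space.
    """
    for x in domain:
        if not prop(x, program(x)):
            return x
    return None

# Demonstrations of a doubling behavior (hypothetical training data).
demos = [(1, 2), (3, 6), (5, 10)]
f = learn_from_demos(demos)

# Safety property: the learned program's output is always even.
counterexample = verify(f, lambda x, y: y % 2 == 0, range(-100, 101))
```

If `counterexample` is `None`, the learned program satisfies the property on the checked domain; otherwise the violating input can drive retraining, mirroring the verify-then-trust workflow the proposal targets.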