Folks, when thinking about testing AI systems, often start by debating how to get inside that black-box neural network to figure out what's going on and how to test it. The challenge, of course, is that these are evolving programs which no one directly controls.

In the case of machine learning algorithms, the biggest factor is the data used to train the model – and IMHO that's where testing should start.

Before loads of data can be fed into the models, all that data needs to be gathered, cleansed and modelled. In my segment I talked about some fundamental concepts: data pipelines, data quality across these pipelines, and some quality checks to look out for at the different stages.

## What's a data pipeline?

All this usually starts with 'Big' data, which means the variety of data, the speed of processing and the sheer size of the data play a big factor. From gathering this data to feeding it to an AI model, the data passes through several stages:

- Data ingestion – getting data from different sources.
- Data lake & data warehouse – creating data/domain models and normalized tables.
- Analytics – creating semantic models and running analytics, or feeding data into machine learning algorithms.

The talks and questions during the panel discussion unearthed some interesting points which I feel might be very helpful for teams exploring how to test AI systems.

## Regulatory requirements

For safety-critical devices, regulatory bodies provide the practices, processes and guidelines governing how safety approvals are given. With AI products, the community is debating what the more practical and pragmatic approach to certifying AI systems would be. Due to the evolving nature of AI products, it is possible the guidelines will be more process-based than centred on the product's functionality itself, since that is going to be a moving target.
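To make the "quality checks across stages" idea a bit more concrete, here is a minimal sketch of two checks that commonly sit at pipeline stage boundaries: a completeness check on ingested records and a row-count reconciliation between stages. All function and field names here are my own illustration, not something prescribed in the talk.

```python
# Minimal sketch of per-stage data-quality checks for a pipeline.
# Names (check_completeness, field names, etc.) are illustrative only.

def check_completeness(records, required_fields):
    """Count records missing any required field (null/empty check)."""
    return sum(
        1 for r in records
        if any(r.get(f) in (None, "") for f in required_fields)
    )

def counts_reconcile(source_count, target_count):
    """Row counts should match when data moves between stages."""
    return source_count == target_count

# Records as they might land at the ingestion stage.
ingested = [
    {"id": 1, "amount": 120.0, "currency": "GBP"},
    {"id": 2, "amount": None, "currency": "GBP"},   # missing amount
    {"id": 3, "amount": 75.5, "currency": ""},      # missing currency
]

# Quality check at ingestion: how many records are incomplete?
bad = check_completeness(ingested, ["id", "amount", "currency"])
print(f"{bad} of {len(ingested)} records failed completeness")

# Reconciliation between ingestion and the warehouse load
# (loaded_count would come from e.g. a COUNT(*) on the target table).
loaded_count = 3
print("counts reconcile:", counts_reconcile(len(ingested), loaded_count))
```

Checks like these are cheap to run at every hand-off point, and catching a dropped or half-empty batch here is far easier than debugging a model that was quietly trained on it.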