It's been a couple of weeks since I updated this blog; things have been pretty hectic, to be honest. Ever since we took a slightly more formal approach to the Agile development process, it really feels like we have been making progress. Until a couple of weeks ago all three students had been working on front end modules (I was given the 'booking' module, still not complete but edging ever closer – it's a loooong story with a target that constantly moves away from me). However, we have now split up slightly; there just wasn't enough front end interface work to keep three people busy, largely because data models and APIs need to be constructed first for the front end to 'hang' off.
One student is still working on front end stuff, one on testing, and I have been given the briefest of introductions to Python and handed the task of creating some of the data models and APIs.
It took half a day to get my head around how all this worked, and another half a day to even start to understand the Python syntax (yeah, I know it's meant to be intuitive, but to a C# native it's madness). I'm getting there with Python 🙂
It's now that all of the work the senior devs put in at the start of the project really starts to pay off. They implemented a couple of systems: SQLAlchemy and Swagger.
SQLAlchemy does a lot of stuff, I'm sure, but the only part of its magic I need to understand right now is that it lets us create a model driven database. What the hell is that? I hear you cry….. Well, let me explain. At NMIT we have often used ORM (Object Relational Mapping), a technique that directly maps database tables to OO objects; in short, it means a whole database can kind of become a giant class and we can work with the data directly through it (I know this is a rough explanation – but it's the best I can do right now). We have used .NET Entity Framework a number of times to achieve this in our Windows Forms applications; generally we created a database first and then generated an entity framework model from it. The approach at Datacom is the reverse: we use a 'model driven' approach, so we create the Python classes first, and whenever we run the server the database is checked against those classes and updated (migrated) with any changes made since the last restart. If the design of the data model has changed, the database migrates to the new structure and a 'migration' file is generated containing the details of the changes.
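To make the "model driven" idea concrete, here's a minimal sketch of a SQLAlchemy declarative model. The `Booking` class and its columns are hypothetical (the real project's models will look different), but the shape is the standard one: the Python class *is* the table definition, and the schema is built from it rather than the other way around.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Booking(Base):
    """A hypothetical booking record -- column names are made up for illustration."""
    __tablename__ = "bookings"

    id = Column(Integer, primary_key=True)
    customer_name = Column(String(100), nullable=False)

# create_all() builds the tables straight from the classes --
# this is the "model driven" part: no hand-written SQL schema.
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

Session = sessionmaker(bind=engine)
session = Session()
session.add(Booking(id=1, customer_name="Alice"))
session.commit()

print(session.query(Booking).count())  # → 1
```

Because the class is the single source of truth, changing a column on `Booking` is what triggers the migration machinery described below.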
Migration files themselves are pretty cool. Each migration file has a 'downgrade' link to the previous migration file, so it's possible to build a linked list of migrations that explains the entire history of the database structure. We can then push these migrations to our Git repository and everyone on the project can have an up to date version of the database. There are a couple of pitfalls with multiple branches when many people are working on the same database, so a little co-ordination is needed, but overall it's an awesome system.
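For a sense of what one of those files looks like: migrations with upgrade/downgrade links like this are typically generated by Alembic, SQLAlchemy's companion migration tool (an assumption on my part about this project's setup). The revision hashes and column below are invented for illustration; the `down_revision` field is the link back to the previous migration that makes the linked list possible.

```python
"""add phone column to bookings

(a hypothetical Alembic migration file, not from the real project)
"""
from alembic import op
import sqlalchemy as sa

# Alembic generates these identifiers; the values here are made up.
revision = "3f2a1b0c4d5e"
down_revision = "9c8d7e6f5a4b"  # link to the previous migration in the chain

def upgrade():
    # Applied when moving the database forward to this revision.
    op.add_column("bookings", sa.Column("phone", sa.String(20)))

def downgrade():
    # Applied when stepping back to the previous revision.
    op.drop_column("bookings", "phone")
```

Walking the `down_revision` links from the newest file back to the first one reconstructs the whole schema history, which is exactly why committing them to Git keeps everyone's database in sync.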
Swagger takes the ball thrown by SQLAlchemy and runs with it: it generates a JSON file describing all of the data models and API interfaces. In our case we use that JSON to ensure we are sending the correct data structures to the correct API endpoints.
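Here's a tiny, hypothetical slice of what such a Swagger (OpenAPI 2.0) JSON file contains, and how code can read it to check a payload's shape. The `/bookings` path and `Booking` definition are invented for illustration; the real generated file is far larger.

```python
import json

# A made-up fragment of a Swagger spec: one endpoint, one model definition.
swagger_json = """
{
  "swagger": "2.0",
  "info": {"title": "Booking API", "version": "1.0"},
  "paths": {
    "/bookings": {
      "post": {
        "parameters": [{"name": "body", "in": "body",
                        "schema": {"$ref": "#/definitions/Booking"}}],
        "responses": {"201": {"description": "Created"}}
      }
    }
  },
  "definitions": {
    "Booking": {
      "type": "object",
      "required": ["customer_name"],
      "properties": {
        "id": {"type": "integer"},
        "customer_name": {"type": "string"}
      }
    }
  }
}
"""

spec = json.loads(swagger_json)

def required_fields(spec, model):
    """Look up which fields a model definition marks as required."""
    return spec["definitions"][model].get("required", [])

print(required_fields(spec, "Booking"))  # → ['customer_name']
```

Because both the front end and the back end read the same generated file, a request missing `customer_name` can be rejected before it ever reaches the database.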
It's all pretty cool and makes creating APIs a breeze. I didn't really understand it until I started to work on the back end last week, but it's really making sense now.
I am coming into my final week of work placement, it’s been awesome, such a learning experience. It’s going to be hard to go back to study after this 😦