

Interview with Bryan – Maritime Autonomy Lead at Mission Systems
MS: Bryan, you mentioned your background includes Mechatronic Engineering and Computer Science?
​​
Bryan: Yes — I completed my undergrad in Mechatronic Engineering and Computer Science at the University of New South Wales. Then I did a PhD on frequency-modulated short-range millimetre-wave radar for collision avoidance in large vehicles.
​
MS: So from radar in large vehicles to underwater robotics — that’s quite a shift.
​
Bryan: I needed a change. I moved into underwater robotics and spent seven years at DST Group. That’s also where I met David. For the last two and a half years, I’ve been working in industry on AI-driven safety camera systems for heavy industrial equipment — construction machines, forklifts, and the like. I joined Mission Systems a month ago.
​
MS: Interesting. Now that AI is such a hot topic — especially with public-facing tools and autonomous vehicles — it’s easy to overlook the foundational work behind the scenes. You've been working on enabling tech in autonomy and machine perception long before this wave of AI hype. Would that be fair to say?
​
Bryan: That’s generous. What’s happening today with large neural networks isn't quite my area — I work more in tracking, localisation, and autonomy algorithms. Less “big data,” more precision engineering. I get a bit cranky when people talk about AI as if it’s one thing — it’s a marketing term. What I do is much more about reliable, robust algorithms and getting real robots to operate in real-world conditions, which is a very different challenge.
​
MS: Absolutely. Implementing real-world autonomy — especially in dynamic marine environments — must present a whole different layer of complexity compared to a lab simulation or desktop algorithm.
​
Bryan: Definitely. The ocean doesn’t play by the rules.
​
MS: So tell me about your role here — Maritime Autonomy Lead — and what that actually involves at Mission Systems.
​
Bryan: We’ve got a Bluefin 21 autonomous underwater vehicle (AUV) arriving soon — it’s a large, commercial-grade robot. My role is to lead the integration, testing, and enhancement of its autonomy systems. It arrives in parts, so first up is verifying hardware, assembling it properly, and confirming it survived shipping. Then comes the fun part — making it smart.
​
MS: Not building from scratch, but making it yours?
​
Bryan: Exactly. It’s a known platform, but we’ll be building out our own autonomy suite and behaviours — including collaborative autonomy where multiple AUVs share data and tasks dynamically.
​
MS: Are you planning multi-vehicle coordination straight away?
​
Bryan: That’s the long-term vision. For now, we’re focused on making one unit operational and mission-ready. But everything we design keeps that multi-agent capability in mind.

MS: What kind of sensor payloads does the Bluefin 21 currently support, and are you looking to expand or swap those out?
​​
Bryan: Right now, the Bluefin 21 comes equipped with a standard suite including side-scan sonar, Doppler velocity logs (DVL), inertial navigation systems (INS), and a CTD sensor for conductivity, temperature, and depth. We're also exploring options to integrate custom payloads — things like advanced acoustic modems, modular environmental sensors, or even compact manipulator arms depending on the mission. It’s all about flexibility and building modularity into the architecture.

MS: And what kind of missions is the robot expected to perform? I understand there are limitations on what can be disclosed publicly.
​
Bryan: Broadly, the robot navigates via waypoints for now. Our goal is to move toward dynamic mission planning where it can re-prioritise tasks or accept new ones on the fly, depending on real-time sensor input and mission parameters.
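
To make that concrete, here is a minimal sketch of a mission queue that re-prioritises tasks when new sensor input arrives. It is purely illustrative: the task names and the single re-scoring rule are hypothetical, not Mission Systems' planner.

```python
import heapq

# Tasks are (priority, name) pairs; lower numbers run first.
class MissionQueue:
    def __init__(self):
        self._tasks = []

    def add(self, priority, name):
        heapq.heappush(self._tasks, (priority, name))

    def rescore(self, sensor_event):
        """Re-prioritise queued tasks against new sensor input."""
        rescored = []
        for priority, name in self._tasks:
            # Hypothetical rule: a sonar contact makes investigation urgent.
            if sensor_event == "contact_detected" and name == "investigate_contact":
                priority = 0
            rescored.append((priority, name))
        heapq.heapify(rescored)
        self._tasks = rescored

    def next_task(self):
        return heapq.heappop(self._tasks)[1] if self._tasks else None

queue = MissionQueue()
queue.add(1, "survey_leg_A")
queue.add(2, "survey_leg_B")
queue.add(5, "investigate_contact")

queue.rescore("contact_detected")  # sonar flags something mid-survey
print(queue.next_task())           # -> investigate_contact
```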
​
MS: So eventually it won’t just follow instructions — it will reason about them?
​​
Bryan: Exactly — autonomy means not just reacting, but adapting intelligently.
​​​
MS: Is that reasoning being done onboard, or offloaded to a surface system?
​
Bryan: Ideally both. There are advantages to surface vehicles, just as there are to underwater vehicles and aerial vehicles; each has its own place in getting the job done. You want the robot to be able to figure things out for itself, but you also want it to relay information back to a surface vehicle, so the surface vehicle can do useful things with it or pass it on to people elsewhere. Tasking can also flow the other way: from the surface vehicle down to the underwater vehicle, or from a person to an aerial vehicle, relayed to the surface vehicle, then relayed to the underwater vehicle. Underwater communications won't reach all the way back to ground control, so you need different vehicles working together to pass the messages, and that means deciding where the reasoning sits. That often depends on the task. It might be completely embodied inside the underwater vehicle, it might be distributed across vehicles, and sometimes there's a human on the loop just passing jobs down to the vehicles.
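
That relay chain can be pictured as a simple store-and-forward pipeline. The sketch below is a toy model with hypothetical node names; a real system would handle addressing, acknowledgements, and lossy acoustic links.

```python
# Tasking flows down the chain, reports flow back up. Each node only
# talks to its immediate neighbours, reflecting the fact that underwater
# acoustic comms cannot reach ground control directly.
CHAIN = ["ground_control", "aerial_vehicle", "surface_vehicle", "underwater_vehicle"]

def relay(message, source, destination):
    """Pass a message hop by hop along the chain, in either direction."""
    i, j = CHAIN.index(source), CHAIN.index(destination)
    hops = CHAIN[min(i, j):max(i, j) + 1]
    if i > j:
        hops = hops[::-1]  # a report travelling back up the chain
    for sender, receiver in zip(hops, hops[1:]):
        print(f"{sender} -> {receiver}: {message}")

relay("task: survey grid 7", "ground_control", "underwater_vehicle")
relay("report: contact at waypoint 3", "underwater_vehicle", "ground_control")
```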
​
MS: You mentioned you're using MOOS-IvP — a middleware framework developed at MIT. That’s quite a robust system for maritime autonomy. Are you modifying core modules or just building behaviours on top?
​​
Bryan: A mix of both. We’re leveraging existing capabilities but developing customised behaviours and message protocols for our specific use cases. One of the big challenges is compressing critical communication into minimal acoustic bandwidth — underwater comms are low-bandwidth and noisy, so every bit counts.
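
As an illustration of that "every bit counts" trade-off, the sketch below packs a status report into 11 bytes by scaling floats to fixed-point integers. The field choices and resolutions are assumptions for the example, not Mission Systems' actual protocol.

```python
import struct

def pack_status(lat_deg, lon_deg, depth_m, battery_pct):
    # Scale to integers: ~1e-5 degree (roughly 1 m) position resolution,
    # 0.1 m depth resolution, whole-percent battery. 11 bytes total,
    # versus 30+ bytes for the same report as ASCII text.
    return struct.pack(
        ">iihB",
        round(lat_deg * 1e5),
        round(lon_deg * 1e5),
        round(depth_m * 10),
        round(battery_pct),
    )

def unpack_status(payload):
    lat, lon, depth, battery = struct.unpack(">iihB", payload)
    return lat / 1e5, lon / 1e5, depth / 10, battery

msg = pack_status(-33.86785, 151.20732, 42.5, 87)
print(len(msg), unpack_status(msg))  # 11 (-33.86785, 151.20732, 42.5, 87)
```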
​
MS: I imagine you’re dealing with a lot of data modelling too — especially around path planning and environment sensing?
​
Bryan: Yes. From ocean current modelling to bathymetry, we build out situational models and behaviours to adjust accordingly. For example, we don’t want the vehicle drifting off-course because it’s not aligned with the current. And you can’t just “Google Maps” your way out of a shallow reef.
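
The drift problem has a classic closed-form fix: steer a crab angle into the current so the cross-track components cancel. A worked example with illustrative numbers:

```python
import math

def heading_to_steer(course_deg, speed_mps, current_speed_mps, current_dir_deg):
    """Heading that cancels the cross-track component of the current.

    course_deg is the desired course over the ground; current_dir_deg is
    the direction the current flows toward.
    """
    # Current component perpendicular to the desired track.
    cross = current_speed_mps * math.sin(math.radians(current_dir_deg - course_deg))
    # Wind-triangle relation: sin(crab) = cross / speed.
    crab = math.degrees(math.asin(cross / speed_mps))
    return (course_deg - crab) % 360

# A 1.5 m/s vehicle in a 0.5 m/s current setting east (090),
# trying to make good a track due north (000):
print(heading_to_steer(0, 1.5, 0.5, 90))  # ~340.5 deg: steer west of north
```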
​​
MS: Do you see the vehicle using live sensor fusion for decision-making, or relying more on preloaded charts and models?
​​
Bryan: Both! You start with a preloaded chart and whatever current environment data you can get from a forecast, and then the vehicle adapts on the fly based on what the sensors are picking up. Just like any real-life scenario: you start with a plan, then you adapt the plan based on what's actually happening.
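
A minimal sketch of that plan-then-adapt pattern: seed a belief about the seabed from the chart, then nudge it toward live sonar returns. The 1-D strip and blending weight are assumptions for illustration; a real system would use a full grid and a proper sensor model.

```python
CHART_DEPTHS = [20.0, 18.5, 15.0, 9.0, 6.5]  # metres, from the preloaded chart
ALPHA = 0.3                                   # trust placed in each new ping

def update(beliefs, cell, sonar_depth):
    """Blend a live sonar return into the prior belief for one cell."""
    beliefs[cell] += ALPHA * (sonar_depth - beliefs[cell])

beliefs = list(CHART_DEPTHS)
for ping in (8.2, 8.0, 7.9):  # the reef is shallower than charted
    update(beliefs, 3, ping)

print(f"cell 3: chart said {CHART_DEPTHS[3]} m, belief now {beliefs[3]:.1f} m")
```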
​​
MS: Before it ever hits the water, everything is tested in a simulation, right?
​​
Bryan: Absolutely. We use robust simulations to test mission plans, vehicle dynamics, and autonomy behaviours. That includes synthetic sensor data. You don’t want to discover a flaw while on a boat, paying thousands in operational costs — or worse, losing the vehicle.
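
For a flavour of what synthetic sensor data buys you, the sketch below feeds a noisy simulated DVL into dead reckoning and checks the drift after ten minutes, all before any hardware gets wet. The noise level and tolerance are illustrative, not Bluefin 21 specifications.

```python
import random

random.seed(42)

TRUE_SPEED = 1.5   # m/s, straight-line run
DT = 1.0           # seconds per step
DVL_NOISE = 0.02   # m/s standard deviation on each velocity reading

position = truth = 0.0
for _ in range(600):  # ten simulated minutes
    measured = TRUE_SPEED + random.gauss(0, DVL_NOISE)
    position += measured * DT  # dead reckoning from the noisy sensor
    truth += TRUE_SPEED * DT

drift = abs(position - truth)
print(f"drift after 10 minutes: {drift:.2f} m")
assert drift < 10.0, "navigation drift outside tolerance"
```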
​
MS: Makes perfect sense. Can you walk me through what “unboxing” the Bluefin 21 actually looks like?
​
Bryan: Picture a 3.5-tonne Christmas morning. We’re expecting two vehicles — one for ops, one likely for spares — plus crates of sensors, comms gear, and oceanographic tools. The fun part is deciding what to keep and what to quietly list on eBay.
​​
MS: “Mission Systems Clearance Store” — niche but fascinating!
​​
Bryan: Exactly.

MS: What are the biggest technical risks you anticipate in getting this first vehicle operational?
​​
Bryan: Well, because it’s a new vehicle, that could be literally anything. The first decision is where to physically open the box! We also won’t know what may have happened during shipping. I haven’t personally worked with a Bluefin vehicle before, and just like cars or boats, every vehicle brand has its own peculiarities, so that will be interesting. The biggest part, I suppose, is that it’s big and heavy, and we have a very steep driveway, so getting it in and out of the water is going to be interesting! But we have great resources and excellent people here, so I’m optimistic.
​​
MS: And what’s your personal approach to balancing innovation with operational reliability? Especially in a mission-critical setting like this?
​
Bryan: Good question! You crawl, then you walk, then you run. Start with the basic things and get those right. You add more complexity once you’ve proven the basics work reliably, and then you layer in more creative things. You also keep fallback plans in case things don’t go as expected: if something doesn’t work the way we wanted, we can adapt on the fly and swap out the plan when we see that an approach isn’t going to work. Simulation helps as well!

MS: Final question — how do you see this field evolving in the next five years?
​
Bryan: I think the future lies in interoperable, multi-agent systems — robots that don’t just operate in isolation, but with each other. They’ll share data, adjust behaviours dynamically, and plug into wider networks — across defence, research, and commercial industries. But to get there, we need robust standards, better acoustic protocols, and a deep understanding of the environment we’re sending them into.