Sensor Field of View Volumes
Field of view volumes define the region of space that a sensor is able to detect. They play an important role in computing access by limiting the visibility of an object to the time periods when it is within the detectable region of space defined by the volume.
There are four types in the class library that can be used to model a field of view volume: RectangularPyramid, ComplexConic, SyntheticApertureRadarVolume, and CustomSensorPattern. Each of these types has specific characteristics that define its detectable region of space. See the reference documentation for each type for more information.
The following example shows how to configure a complex conic field of view as a simple cone.
// This cone has a full angle of 60 degrees and no holes.
double innerHalfAngle = 0.0; // There is no inner exclusion cone.
double outerHalfAngle = Math.PI / 6.0; // 30 degree half angle

// The clock angle range is a full circle.
double minimumClockAngle = 0.0;
double maximumClockAngle = Constants.TwoPi;

// The cone will have no radial limit.
double radius = double.PositiveInfinity;

ComplexConic simpleCone = new ComplexConic();
simpleCone.SetHalfAngles(innerHalfAngle, outerHalfAngle);
simpleCone.SetClockAngles(minimumClockAngle, maximumClockAngle);
simpleCone.Radius = radius;
Once the field of view has been configured, it can be applied to a Platform using a FieldOfViewExtension.
Platform sensorPlatform = new Platform();
sensorPlatform.LocationPoint = CreateAnyOldPoint();
sensorPlatform.OrientationAxes = CreateAnyOldAxes();

FieldOfViewExtension fovExtension = new FieldOfViewExtension(simpleCone);
sensorPlatform.Extensions.Add(fovExtension);
The sensor adopts the position and orientation of the Platform to which it is attached. Its boresight is down the platform's Z-axis and the platform's X-axis is considered "up."
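For example, to point the boresight at the center of the Earth, the platform's orientation axes can be defined so that the Z-axis is aligned with the direction from the platform to the Earth's center. The following is a minimal sketch; it assumes the AxesAlignedConstrained, VectorTrueDisplacement, and VectorFixed geometry types behave as their names suggest, so consult the reference documentation for the exact constructors.

// Point the sensor boresight (the platform's Z-axis) at the Earth's center,
// using the inertial Z direction to constrain the remaining degree of freedom.
EarthCentralBody centralBody = CentralBodiesFacet.GetFromContext().Earth;
Vector boresightDirection = new VectorTrueDisplacement(sensorPlatform.LocationPoint, centralBody.CenterOfMassPoint);
Vector constraintDirection = new VectorFixed(centralBody.InertialFrame.Axes, new Cartesian(0.0, 0.0, 1.0));
sensorPlatform.OrientationAxes = new AxesAlignedConstrained(boresightDirection, AxisIndicator.Third,
                                                            constraintDirection, AxisIndicator.First);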
The FieldOfViewExtension can be used to easily determine if a Point is inside the field of view volume at a given time.
JulianDate date = new GregorianDate(2009, 8, 26, 12, 0, 0.0).ToJulianDate();
Point pointOfInterest = CreateSomeOtherPoint();

Evaluator<bool> evaluator = fovExtension.GetPointIsInFieldOfViewEvaluator(pointOfInterest);
bool pointIsInVolume = evaluator.Evaluate(date);
The extension can also be used to find the boundary of the projection of the field of view at a given time.
EarthCentralBody earth = CentralBodiesFacet.GetFromContext().Earth;

SensorProjectionEvaluator projectionEvaluator = fovExtension.GetSensorProjectionEvaluator(earth);
SensorProjection projection = projectionEvaluator.Evaluate(date);
A SensorProjection has two collections of boundaries, SurfaceBoundaries and SpaceBoundaries. Surface boundaries are formed where the sensor volume intersects with the Earth or other central body. Space boundaries represent the portion of the sensor volume that does not intersect with the central body, projected to a radial limit or into a plane perpendicular to the sensor boresight. The exact nature of the space boundaries can be controlled by specifying SensorProjectionOptions in the call to GetSensorProjectionEvaluator.
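For example, the space boundaries can be limited to a particular distance from the sensor. The sketch below assumes that SensorProjectionOptions can be constructed with a projection radius; the members actually available are listed in the reference documentation.

// Limit the projection of the space boundaries to 1000 km from the sensor.
// The constructor taking a radius is an assumption; see the reference documentation.
SensorProjectionOptions options = new SensorProjectionOptions(1000000.0);
SensorProjectionEvaluator limitedEvaluator = fovExtension.GetSensorProjectionEvaluator(earth, options);
SensorProjection limitedProjection = limitedEvaluator.Evaluate(date);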
Finally, a Platform with a FieldOfViewExtension can be used to constrain access by applying a SensorVolumeConstraint to the sensor Platform.
// Create a sensor volume constraint so that the volume is actually used to constrain access.
SensorVolumeConstraint constraint = new SensorVolumeConstraint();

// Create a facility - we will be determining access from the sensor to the facility.
Platform facility = new Platform();
facility.LocationPoint = CreateSomeOtherPoint();
facility.OrientationAxes = CreateAnyOldAxes();

// Create a link between the facility and the sensor platform (ignoring light time delay).
LinkInstantaneous link = new LinkInstantaneous(facility, sensorPlatform);

// Apply the sensor volume constraint to the receiver (the object with the sensor).
constraint.ConstrainedLink = link;
constraint.ConstrainedLinkEnd = LinkRole.Receiver;

// Add a central body obstruction constraint so that the Earth obstructs access as well.
// Otherwise, access is determined to exist any time the facility is inside the receiver's
// sensor volume, even if the facility is on the other side of the Earth from the sensor.
CentralBodyObstructionConstraint constraint2 = new CentralBodyObstructionConstraint();
constraint2.ConstrainedLink = link;
constraint2.ConstrainedLinkEnd = LinkRole.Transmitter;

// Create an AccessQuery that is satisfied when both of the constraints are satisfied.
AccessQueryAnd access = new AccessQueryAnd(constraint, constraint2);

// Create the access evaluator. An access evaluator generally needs a specified
// observer, because access computations with light time delays can cause
// different platforms to have different time intervals for the same constraints.
// However, AccessQueries built purely from instantaneous links (as here)
// do not need a specified observer.
AccessEvaluator accessEvaluator = access.GetEvaluator();

// Define the analysis interval over which to compute access.
JulianDate start = new GregorianDate(2009, 8, 26, 0, 0, 0.0).ToJulianDate();
JulianDate end = new GregorianDate(2009, 8, 27, 0, 0, 0.0).ToJulianDate();

// Compute the access with the sensor volume constraint and central body obstruction constraint.
AccessQueryResult accessResult = accessEvaluator.Evaluate(start, end);
TimeIntervalCollection intervals = accessResult.SatisfactionIntervals;
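The satisfaction intervals can then be examined directly. For example:

// Report each interval during which both constraints are satisfied.
foreach (TimeInterval interval in intervals)
{
    Console.WriteLine("Access from {0} to {1}", interval.Start, interval.Stop);
}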
It is possible to use the SensorFieldOfView directly, without the FieldOfViewExtension. This is a lower-level interface and is a bit more complex to use, but it allows some additional flexibility. The main difference is that a stand-alone SensorFieldOfView has no knowledge of its position or orientation. So when determining whether a point is inside the field of view volume, for example, the coordinates of the point must be specified directly in the sensor's reference frame. The following code shows how to determine if a point, given as a Cartesian specified in the sensor's reference frame, is inside the sensor volume.
bool pointIsInside = simpleCone.Encloses(new Cartesian(0.0, 0.0, 100.0));
// true - that point is along the axis of the cone.

pointIsInside = simpleCone.Encloses(new Cartesian(1.0, 1.0, 0.0));
// false - that point is in the x-y plane of the cone, along with the vertex of the cone.
Similarly, the boundaries of the field of view projected onto an ellipsoid and into space can be obtained directly from the SensorFieldOfView. The GetProjection method requires the ellipsoid shape onto which the field of view volume is to be projected and the KinematicTransformation that relates the reference frame of the ellipsoid to the reference frame of the field of view.
// Get the default shape of the Earth ellipsoid.
Ellipsoid ellipsoid = CentralBodiesFacet.GetFromContext().Earth.Shape;

// Form a kinematic transformation that places the reference frame of the volume
// on the x-axis of the ellipsoid and pointing back at the ellipsoid.
// Place the origin 300 km above the ellipsoid surface.
Cartesian position = new Cartesian(ellipsoid.SemiaxisLengths.X + 300000.0, 0.0, 0.0);

// This rotation points the z-axis of the volume back along the x-axis of the ellipsoid.
UnitQuaternion rotation = new UnitQuaternion(new ElementaryRotation(AxisIndicator.Second, -Constants.HalfPi));

// Create the full kinematic transformation from the position and rotation.
// Note that, in this example, the origin and axes of the volume are not moving with
// respect to the reference frame of the ellipsoid.
Motion<Cartesian> translationalMotion = new Motion<Cartesian>(position, Cartesian.Zero, Cartesian.Zero);
Motion<UnitQuaternion, Cartesian> rotationalMotion = new Motion<UnitQuaternion, Cartesian>(rotation, Cartesian.Zero, Cartesian.Zero);
KinematicTransformation transformation = new KinematicTransformation(translationalMotion, rotationalMotion);

// Now get the boundary of the projection onto the surface of the ellipsoid and into space.
SensorProjection sensorProjection = simpleCone.GetProjection(ellipsoid, transformation);

// Figure out how many distinct surface boundaries there are.
// In this example, there is only one distinct boundary and it has no holes.
int numberOfSurfaceBoundaries = sensorProjection.SurfaceBoundaries.Count;

// Now get points 10%, 50%, and 90% of the way around the surface boundary.
Curve boundaryCurve = sensorProjection.SurfaceBoundaries[0].BoundaryCurve;
Cartesian tenPercent = boundaryCurve.InterpolateUsingFraction(0.1);
Cartesian fiftyPercent = boundaryCurve.InterpolateUsingFraction(0.5);
Cartesian ninetyPercent = boundaryCurve.InterpolateUsingFraction(0.9);
The sensors described so far can change their orientation or pointing with time - the sensor rotates with the platform to which it is attached. But what if we want to change the shape of the sensor with time? For example, an optical sensor is initially zoomed out in order to take a picture of a wide area. Then, when a particular target of interest is identified, it zooms in to take a detailed picture of the target. If the sensor is modeled as a RectangularPyramid field of view, we want to model this zooming behavior as changes to the XHalfAngle and YHalfAngle properties.
One obvious way to achieve this is to actually change the values of the half angle properties. This is not recommended, however. For one thing, it won't work when using the sensor in a larger computation such as access, because there's no easy place to make the required changes. Furthermore, any evaluators that are dependent on the sensor volume will need to be recreated after changing the volume definition in order for the change to be reflected in the evaluation. Recreating evaluators frequently like this is a recipe for poor performance. See the Evaluators And Evaluator Groups topic for more information.
Fortunately, DME Component Libraries provides a solution in the form of the DynamicSensorFieldOfView class. To create a sensor that changes shape over time, derive a new class from DynamicSensorFieldOfView and implement its evaluator to return the appropriate shape for the time. You can then attach it to a platform using DynamicFieldOfViewExtension instead of FieldOfViewExtension. From there, the dynamic field of view can be used in access and projection operations in the same way as a static one.
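For example, attaching such a sensor might look like the sketch below. Here ZoomingRectangularPyramid is a hypothetical user-written class derived from DynamicSensorFieldOfView whose evaluator returns a RectangularPyramid with half angles that vary with time; the DynamicFieldOfViewExtension constructor is assumed to mirror that of FieldOfViewExtension.

// ZoomingRectangularPyramid is a hypothetical user-written class derived from
// DynamicSensorFieldOfView whose evaluator returns a RectangularPyramid with
// half angles that vary with time.
DynamicSensorFieldOfView zoomingSensor = new ZoomingRectangularPyramid();

// Attach it to the platform with DynamicFieldOfViewExtension instead of FieldOfViewExtension.
DynamicFieldOfViewExtension dynamicFovExtension = new DynamicFieldOfViewExtension(zoomingSensor);
sensorPlatform.Extensions.Add(dynamicFovExtension);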