A Preliminary Test of VINS-Mono from the Hong Kong University of Science and Technology


Brief introduction

VINS-Mono is an open-source VIO system from the Hong Kong University of Science and Technology. I gave it a quick test and found that it works well. These are some quick notes; I will fill in the details later.

The code is mainly divided into the front end (feature tracker), the back end (sliding window, loop closure), and the initialization (visual-IMU alignment).

Feature tracker

This part of the code lives in the feature_tracker package. It subscribes to the image topic and uses the KLT optical flow algorithm to track feature points, maintaining a minimum number (100-300) of feature points per frame.

The freq parameter in the configuration file determines how often the detected feature points are packaged into the /feature_tracker/feature topic and published.

If it is not yet time to publish, the features are simply tracked into the next frame. The appeal of KLT tracking is that not every image needs full feature extraction: fresh corner detection is expensive, and the extra data would be largely redundant. Since tracking still runs on every frame, the motion between consecutive images stays small.

It is recommended to set freq to at least 10 in the configuration file to ensure a good front end.

void img_callback(const sensor_msgs::ImageConstPtr &img_msg)
{
    for (int i = 0; i < NUM_OF_CAM; i++)
    {
        ROS_DEBUG("processing camera %d", i);
        if (i != 1 || !STEREO_TRACK)
            //Call FeatureTracker::readImage on this camera's sub-image
            trackerData[i].readImage(ptr->image.rowRange(ROW * i, ROW * (i + 1)));
    }

    for (unsigned int i = 0;; i++)
    {
        bool completed = false;
        for (int j = 0; j < NUM_OF_CAM; j++)
            if (j != 1 || !STEREO_TRACK)
            //Update feature ID
                completed |= trackerData[j].updateID(i);
        if (!completed)
            break;
    }
    
    //Publish the feature-point topic, throttled so the average publish rate stays at or below FREQ
    if (round(1.0 * pub_count / (img_msg->header.stamp.toSec() - first_image_time)) <= FREQ)
    {
        sensor_msgs::PointCloudPtr feature_points(new sensor_msgs::PointCloud);
        //id of feature point and (u,v) coordinate of image
        sensor_msgs::ChannelFloat32 id_of_point;
        sensor_msgs::ChannelFloat32 u_of_point;
        sensor_msgs::ChannelFloat32 v_of_point;

        //(filling of the point cloud and its channels is omitted in this excerpt)
        pub_img.publish(feature_points);

    }

     if (SHOW_TRACK)
     {
        //tmp_img is this camera's sub-image (the conversion code is omitted in this excerpt)
        for (int i = 0; i < NUM_OF_CAM; i++)
            for (unsigned int j = 0; j < trackerData[i].cur_pts.size(); j++)
            {
                //Color each point by its track count: the redder, the longer it has been
                //tracked. If most points in an image are blue, the front-end tracker is
                //performing poorly and the estimator is likely to fail.
                double len = std::min(1.0, 1.0 * trackerData[i].track_cnt[j] / WINDOW_SIZE);
                cv::circle(tmp_img, trackerData[i].cur_pts[j], 2, cv::Scalar(255 * (1 - len), 0, 255 * len), 2);
            }
     }


}
void FeatureTracker::readImage(const cv::Mat &_img)
{
    //Histogram equalization (CLAHE)
    //if the image is too dark or too bright, turn on EQUALIZE to find enough features
    if (EQUALIZE)
    {
        cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(3.0, cv::Size(8, 8));
        TicToc t_c;
        clahe->apply(_img, img);
        ROS_DEBUG("CLAHE costs: %fms", t_c.toc());
    }

    if (cur_pts.size() > 0)
    {
        TicToc t_o;
        vector<uchar> status;
        vector<float> err;
        //Given cur_img and cur_pts from the previous frame, track the points into forw_pts in the current frame
        cv::calcOpticalFlowPyrLK(cur_img, forw_img, cur_pts, forw_pts, status, err, cv::Size(21, 21), 3);
    }

    if (img_cnt == 0)
    {
        //Remove outliers via RANSAC on the fundamental matrix
        rejectWithF();
        //Increment the track count of every surviving feature point
        for (auto &n : track_cnt)
            n++;
        //Set a mask for the following goodFeaturesToTrack so that new feature points stay at least MIN_DIST (30) pixels away from existing ones
        setMask();

        int n_max_cnt = MAX_CNT - static_cast<int>(forw_pts.size());
        if (n_max_cnt > 0)
        {
            //Ensure that each image has sufficient feature points
            cv::goodFeaturesToTrack(forw_img, n_pts, MAX_CNT - forw_pts.size(), 0.1, MIN_DIST, mask);
        }


    }
}
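
For context, here is a minimal sketch of what rejectWithF() does, based on my reading of the source (simplified; the real implementation first lifts the points through the camera model onto a virtual pinhole camera before the test):

void FeatureTracker::rejectWithF()
{
    //The eight-point algorithm needs at least 8 correspondences
    if (forw_pts.size() >= 8)
    {
        vector<uchar> status;
        //RANSAC on the fundamental matrix between the previous and current points;
        //F_THRESHOLD is the maximum epipolar distance (in pixels) for an inlier
        cv::findFundamentalMat(cur_pts, forw_pts, cv::FM_RANSAC, F_THRESHOLD, 0.99, status);
        //Keep only the inliers in every parallel container
        reduceVector(prev_pts, status);
        reduceVector(cur_pts, status);
        reduceVector(forw_pts, status);
        reduceVector(ids, status);
        reduceVector(track_cnt, status);
    }
}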

Sliding window

It mainly includes pre-integration of the IMU data, construction of the visual reprojection error, loop-closure detection, and maintenance of the sliding window, among other things.

Loop-closure detection uses a visual bag of words. The features it uses are not the ones from the feature tracker, which would be too few; instead, IMAGE_TOPIC is subscribed to and passed to the loop-closure module, where features are detected afresh. I have not looked at this part yet.

The most important part is constructing the following nonlinear least-squares problem; I list the main code below.
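
For reference, the objective has the form given in the VINS-Mono paper (my transcription): a marginalization prior, the IMU pre-integration residuals, and robustified visual reprojection residuals,

\min_{\mathcal{X}} \left\{ \left\| r_p - H_p \mathcal{X} \right\|^2 + \sum_{k \in \mathcal{B}} \left\| r_{\mathcal{B}}\big(\hat{z}^{b_k}_{b_{k+1}}, \mathcal{X}\big) \right\|^2_{P^{b_k}_{b_{k+1}}} + \sum_{(l,j) \in \mathcal{C}} \rho\Big( \left\| r_{\mathcal{C}}\big(\hat{z}^{c_j}_{l}, \mathcal{X}\big) \right\|^2_{P^{c_j}_{l}} \Big) \right\}

where \mathcal{X} stacks the window states (p, v, q, b_a, b_g), the camera-IMU extrinsics, and the inverse feature depths, and \rho is a robust loss; analogous reprojection terms are added for loop closures when LOOP_CLOSURE is set.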

void Estimator::processIMU(double dt, const Vector3d &linear_acceleration, const Vector3d &angular_velocity)
{

    if (frame_count != 0)
    {
        pre_integrations[frame_count]->push_back(dt, linear_acceleration, angular_velocity);
        //IMU pre-integration; tmp_pre_integration is also accumulated for the initialization
        tmp_pre_integration->push_back(dt, linear_acceleration, angular_velocity);

        dt_buf[frame_count].push_back(dt);
        linear_acceleration_buf[frame_count].push_back(linear_acceleration);
        angular_velocity_buf[frame_count].push_back(angular_velocity);

        //Propagate the state with midpoint integration to provide initial values for the optimization
        int j = frame_count;         
        Vector3d un_acc_0 = Rs[j] * (acc_0 - Bas[j]) - g;
        Vector3d un_gyr = 0.5 * (gyr_0 + angular_velocity) - Bgs[j];
        Rs[j] *= Utility::deltaQ(un_gyr * dt).toRotationMatrix();
        Vector3d un_acc_1 = Rs[j] * (linear_acceleration - Bas[j]) - g;
        Vector3d un_acc = 0.5 * (un_acc_0 + un_acc_1);
        Ps[j] += dt * Vs[j] + 0.5 * dt * dt * un_acc;
        Vs[j] += dt * un_acc;
    }

}
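
Written out, the propagation above is midpoint integration (my transcription of the code, with a_m, \omega_m the raw measurements, b_a, b_g the biases, g gravity):

\begin{aligned}
\hat{a}_0 &= R_j (a_{m,0} - b_a) - g \\
\bar{\omega} &= \tfrac{1}{2}(\omega_{m,0} + \omega_{m,1}) - b_g \\
R_j &\leftarrow R_j \,\mathrm{Exp}(\bar{\omega}\,\delta t) \\
\hat{a}_1 &= R_j (a_{m,1} - b_a) - g \\
\bar{a} &= \tfrac{1}{2}(\hat{a}_0 + \hat{a}_1) \\
P_j &\leftarrow P_j + V_j\,\delta t + \tfrac{1}{2}\bar{a}\,\delta t^2, \qquad V_j \leftarrow V_j + \bar{a}\,\delta t
\end{aligned}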

void Estimator::processImage(const map<int, vector<pair<int, Vector3d>>> &image, const std_msgs::Header &header)
{
    //Decide from parallax whether the current frame is a keyframe
    if (f_manager.addFeatureCheckParallax(frame_count, image))
        marginalization_flag = MARGIN_OLD;
    else
        marginalization_flag = MARGIN_SECOND_NEW;

    ImageFrame imageframe(image, header.stamp.toSec());
    imageframe.pre_integration = tmp_pre_integration;
    all_image_frame.insert(make_pair(header.stamp.toSec(), imageframe));
    tmp_pre_integration = new IntegrationBase{acc_0, gyr_0, Bas[frame_count], Bgs[frame_count]};

    //Initialization process
    if (solver_flag == INITIAL)
    {
        if (frame_count == WINDOW_SIZE)
        {
            bool result = false;
            if( ESTIMATE_EXTRINSIC != 2 && (header.stamp.toSec() - initial_timestamp) > 0.1)
            {
               result = initialStructure();
               initial_timestamp = header.stamp.toSec();
            }
            if(result)
            {
                solver_flag = NON_LINEAR;
                solveOdometry();
                slideWindow();
                f_manager.removeFailures();
                ROS_INFO("Initialization finish!");
                last_R = Rs[WINDOW_SIZE];
                last_P = Ps[WINDOW_SIZE];
                last_R0 = Rs[0];
                last_P0 = Ps[0];
                
            }
            else
                slideWindow();
        }
        else
            frame_count++;
    }
    else
    {
       
        solveOdometry();

        if (failureDetection())
        {
            clearState();
            setParameter();
            return;
        }

        slideWindow();
        f_manager.removeFailures();
        // prepare output of VINS
        key_poses.clear();
        for (int i = 0; i <= WINDOW_SIZE; i++)
            key_poses.push_back(Ps[i]);

        last_R = Rs[WINDOW_SIZE];
        last_P = Ps[WINDOW_SIZE];
        last_R0 = Rs[0];
        last_P0 = Ps[0];
    }

}
void Estimator::slideWindow()
{
//Shift the states within WINDOW_SIZE; FeatureManager adjusts the features it manages accordingly
}
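
As a rough sketch of the MARGIN_OLD branch (my simplification; the real function also shuffles timestamps, headers, and buffers, and rebuilds the last pre-integration): every state in the window is shifted one slot to the left so the oldest frame drops out.

void Estimator::slideWindowOld_sketch()  //hypothetical name, for illustration only
{
    for (int i = 0; i < WINDOW_SIZE; i++)
    {
        Rs[i].swap(Rs[i + 1]);
        Ps[i].swap(Ps[i + 1]);
        Vs[i].swap(Vs[i + 1]);
        Bas[i].swap(Bas[i + 1]);
        Bgs[i].swap(Bgs[i + 1]);
        std::swap(pre_integrations[i], pre_integrations[i + 1]);
    }
    //slot WINDOW_SIZE is then re-initialized from the newest state and
    //receives the next incoming frame
}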

void Estimator::solveOdometry()
{
    if (frame_count < WINDOW_SIZE)
        return;
    if (solver_flag == NON_LINEAR)
    {
        TicToc t_tri;
        f_manager.triangulate(Ps, tic, ric);
        ROS_DEBUG("triangulation costs %f", t_tri.toc());
        optimization();
    }
}
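
f_manager.triangulate() recovers each feature's depth with the standard linear (DLT) triangulation; I sketch the math from memory. Each observing frame, with projection-matrix rows p_1^\top, p_2^\top, p_3^\top and normalized observation (u, v), contributes two equations

\begin{aligned}
(u\, p_3^\top - p_1^\top)\, X &= 0 \\
(v\, p_3^\top - p_2^\top)\, X &= 0
\end{aligned}

Stacking them over all frames gives A X = 0; the homogeneous point X is the right singular vector of A with the smallest singular value, from which the inverse depth in the first observing frame is stored.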
void Estimator::optimization()
{
    //Add the states of every frame in the window: (p, v, q, b_a, b_g)
    for (int i = 0; i < WINDOW_SIZE + 1; i++)
    {
        ceres::LocalParameterization *local_parameterization = new PoseLocalParameterization();
        problem.AddParameterBlock(para_Pose[i], SIZE_POSE, local_parameterization);
        problem.AddParameterBlock(para_SpeedBias[i], SIZE_SPEEDBIAS);
    }
    //Add the camera-IMU extrinsic parameters
    for (int i = 0; i < NUM_OF_CAM; i++)
    {
        ceres::LocalParameterization *local_parameterization = new PoseLocalParameterization();
        problem.AddParameterBlock(para_Ex_Pose[i], SIZE_POSE, local_parameterization);
    }

    //Add the marginalization residual (the prior from previously marginalized states)
    if (last_marginalization_info)
    {
        // construct a new marginalization factor
        MarginalizationFactor *marginalization_factor = new MarginalizationFactor(last_marginalization_info);
        problem.AddResidualBlock(marginalization_factor, NULL,
                                 last_marginalization_parameter_blocks);
    } 

    //Add the IMU pre-integration residuals between consecutive frames
    for (int i = 0; i < WINDOW_SIZE; i++)
    {
        int j = i + 1;
        if (pre_integrations[j]->sum_dt > 10.0)
            continue;
        IMUFactor* imu_factor = new IMUFactor(pre_integrations[j]);
        problem.AddResidualBlock(imu_factor, NULL, para_Pose[i], para_SpeedBias[i], para_Pose[j], para_SpeedBias[j]);
    }

    //Add the visual reprojection residuals
    //(the setup of imu_i, pts_i, and feature_index is omitted in this excerpt)
    for (auto &it_per_id : f_manager.feature)
    {
        for (auto &it_per_frame : it_per_id.feature_per_frame)
        {
            imu_j++;
            if (imu_i == imu_j)
            {
                continue;
            }
            Vector3d pts_j = it_per_frame.point;
            ProjectionFactor *f = new ProjectionFactor(pts_i, pts_j);
            problem.AddResidualBlock(f, loss_function, para_Pose[imu_i], para_Pose[imu_j], para_Ex_Pose[0], para_Feature[feature_index]);
            f_m_cnt++;
        }
    }

    //Add the loop-closure pose parameter and its reprojection residuals (excerpt; the surrounding feature loop is omitted)
    if(LOOP_CLOSURE)
    {
        ceres::LocalParameterization *local_parameterization = new PoseLocalParameterization();
        problem.AddParameterBlock(front_pose.loop_pose, SIZE_POSE, local_parameterization);
        
        if(front_pose.features_ids[retrive_feature_index] == it_per_id.feature_id)
        {
            Vector3d pts_j = Vector3d(front_pose.measurements[retrive_feature_index].x, front_pose.measurements[retrive_feature_index].y, 1.0);
            Vector3d pts_i = it_per_id.feature_per_frame[0].point;
            
            ProjectionFactor *f = new ProjectionFactor(pts_i, pts_j);
            problem.AddResidualBlock(f, loss_function, para_Pose[start], front_pose.loop_pose, para_Ex_Pose[0], para_Feature[feature_index]);
        
            retrive_feature_index++;
            loop_factor_cnt++;
        }
    }

    //Cap the solver time to guarantee real-time performance
    if (marginalization_flag == MARGIN_OLD)
        options.max_solver_time_in_seconds = SOLVER_TIME * 4.0 / 5.0;
    else
        options.max_solver_time_in_seconds = SOLVER_TIME;

    if (marginalization_flag == MARGIN_OLD)
    {
        //If the current frame is a keyframe, marginalize out the oldest frame entirely and construct a new marginalization prior
    }
    else{
        //If the current frame is not a keyframe, discard the visual measurements of the second-newest frame (its IMU measurements are kept) and construct a new marginalization prior
    }
    
}
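
The marginalization itself is the standard Schur complement. Partitioning the information matrix \Lambda and vector b into the states to drop (m) and the states to keep (r),

\Lambda' = \Lambda_{rr} - \Lambda_{rm}\, \Lambda_{mm}^{-1}\, \Lambda_{mr}, \qquad b' = b_r - \Lambda_{rm}\, \Lambda_{mm}^{-1}\, b_m

and this prior is what reappears as the residual \| r_p - H_p \mathcal{X} \|^2 at the top of optimization(). As I understand it, this is what MarginalizationInfo and MarginalizationFactor implement.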

Follow-up

The IMU parameters are very important, and so is hardware synchronization between the camera and the IMU; a global-shutter camera also matters a lot. When I move fast the result breaks down, yet the authors' video looks good.

I will keep digging into the hardware and the principles behind the code. My impression is that the FOCAL_LENGTH used in the least-squares optimization should be set according to one's own camera, but I have not examined this in detail, nor how the visual information matrix is set.
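
On that point, my current reading of the code (treat it as an assumption until verified): the reprojection residual is evaluated on the normalized image plane, and its square-root information is a constant built from a virtual focal length FOCAL_LENGTH (460 in parameters.h) and an assumed measurement noise of 1.5 pixels,

ProjectionFactor::sqrt_info = FOCAL_LENGTH / 1.5 * Matrix2d::Identity();

so FOCAL_LENGTH would act as a fixed noise scaling rather than the camera's actual focal length, which would explain why it does not appear in the per-camera configuration file.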
