First, two screenshots of the results. The main functions are the two shown above: browsing the panorama, placing panorama markers, and querying their attributes.
Note: not every picture can be used as a panorama. The sphere mapping used here expects an equirectangular panorama, which you can download from dedicated panorama websites. Poorly made panoramas can also show visible seams or distortion.
Basic Principles
We can imagine the scene we observe as a sphere, with the panorama being a picture attached to that sphere. By moving the mouse we change our viewing direction and see other parts of the panorama.
Here the sphere is drawn very small (as in the two pictures above) so that it can be seen intuitively. With a suitable camera position and perspective projection, the panorama can be displayed with as little distortion as possible.
When we slightly adjust the viewing angle and the camera's field of view, or make the radius of the sphere large enough, the sphere itself can no longer be perceived, which creates the feeling of immersion.
Another function is marking specific POI points in the panorama. This is not a trivial problem: the POI positions must be stored in a database and then follow the panorama as the view changes, which involves the coordinate transformations discussed below.
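As a preview, here is a minimal sketch of that core transformation, written as a hypothetical lonLatToVector3 helper. It is the same spherical-coordinate math as the geoPosition2World function in the full source at the end of this post, assuming a sphere radius pRadius (defined later):
// Map (lon, lat) in degrees to a point on a sphere of radius pRadius.
// phi is the polar angle measured from the +Y axis, theta the azimuth around it.
function lonLatToVector3( lon, lat, pRadius ) {
    var phi = THREE.Math.degToRad( 90 - lat );
    var theta = THREE.Math.degToRad( lon );
    return new THREE.Vector3(
        pRadius * Math.sin( phi ) * Math.cos( theta ),
        pRadius * Math.cos( phi ),
        pRadius * Math.sin( phi ) * Math.sin( theta )
    );
}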
I. Parameter Initialization
In the previous blog post I described the elements needed to build a three.js scene, and a panorama needs no more than those same elements (camera, renderer, scene).
1.1 Initializing the Camera
For the panorama itself we use perspective projection: this projection mode is designed to imitate the way the human eye sees and is the most commonly used projection for rendering 3D scenes. The parameters below are the vertical field of view, the aspect ratio, and the near and far clipping planes.
camera = new THREE.PerspectiveCamera( fov, window.innerWidth / window.innerHeight, 1, 1100 );
camera.target = new THREE.Vector3( 0, 0, 0 );
Our markers are not three-dimensional elements and their size should not change with the viewing direction, so we render them with an orthographic projection: in this mode the size of an object in the rendered image is independent of its distance from the camera. This is typically used for 2D scenes, UI elements, and so on. Because the orthographic frustum below matches the window in pixels, one unit in this scene corresponds to one screen pixel.
cameraOrtho = new THREE.OrthographicCamera( - window.innerWidth / 2, window.innerWidth / 2, window.innerHeight / 2, - window.innerHeight / 2, 1, 10 );
cameraOrtho.position.z = 10;
1.2 Initializing the Scenes
We also need two scenes, one for the panorama and one for the markers:
scene = new THREE.Scene();
sceneOrtho = new THREE.Scene();
1.3 Initializing the Panorama
Create the sphere mesh that carries the panoramic view:
mesh = new THREE.Mesh( new THREE.SphereGeometry( 2, 60, 40 ),
new THREE.MeshBasicMaterial(
{ map: THREE.ImageUtils.loadTexture( 'textures/pano-1.jpg' ) }
) );
mesh.scale.x = -1; // flip the sphere so the texture faces inward and is viewed from inside
scene.add( mesh );
The first parameter is the radius; it must lie between the camera's near and far clipping planes (1 and 1100 here), otherwise the sphere is clipped away. The second and third parameters are the numbers of horizontal and vertical segments: if they are too small the panorama looks distorted, while larger values give a smoother sphere at the cost of performance.
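THREE.ImageUtils.loadTexture is deprecated in newer three.js releases (and eventually removed). If you are on such a version, the same mesh can be built with THREE.TextureLoader, which this post already uses for the markers; a minimal sketch:
mesh = new THREE.Mesh(
    new THREE.SphereGeometry( 2, 60, 40 ),
    new THREE.MeshBasicMaterial( {
        map: new THREE.TextureLoader().load( 'textures/pano-1.jpg' )
    } )
);
mesh.scale.x = -1; // flip the sphere so the texture is seen from the inside
scene.add( mesh );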
1.4 Creating the Renderer
renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.autoClear = false; // we clear manually because two scenes are rendered each frame
container.appendChild( renderer.domElement );
1.5 Initializing the Animation Loop
function animate() {
requestAnimationFrame( animate );
renderer.clear();
renderer.render( scene, camera );
}
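This first version only draws the panorama. Once the marker scene exists, the render pass draws both scenes in sequence, which is also why autoClear is disabled above; the final render() in the full source does the following:
renderer.clear();                            // clear color and depth manually (autoClear is false)
renderer.render( scene, camera );            // draw the panorama sphere
renderer.clearDepth();                       // keep the image, reset the depth buffer
renderer.render( sceneOrtho, cameraOrtho );  // draw the 2D markers on top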
II. Moving the Viewpoint
2.1 Event Listeners
Register event listeners on the document and the window:
document.addEventListener( 'mousedown', onDocumentMouseDown, false );
document.addEventListener( 'mousemove', onDocumentMouseMove, false );
document.addEventListener( 'mouseup', onDocumentMouseUp, false );
window.addEventListener( 'resize', onWindowResize, false );
2.2 Mouse Down Event
function onDocumentMouseDown( event ) {
event.preventDefault();
isUserInteracting = true;
onPointerDownPointerX = event.clientX;
onPointerDownPointerY = event.clientY;
onPointerDownLon = lon;
onPointerDownLat = lat;
}
isUserInteracting marks whether the user is currently pressing the mouse; we also record the screen coordinates of the press and the longitude and latitude at that moment.
Note that this longitude and latitude is not geographic longitude and latitude: it is a virtual coordinate system modeled on the Earth's, used to address any position on the sphere and to simplify the calculations. For example, lon = 0, lat = 0 maps to the point (pRadius, 0, 0), and lat = 90 would map to the top of the sphere.
2.3 Mouse Move and Mouse Up Events
The longitude and latitude of the screen center are updated from the distance the mouse has moved; the factor 0.1 converts pixels of movement into degrees and controls the drag sensitivity.
function onDocumentMouseMove( event ) {
if ( isUserInteracting ) {
lon = ( onPointerDownPointerX - event.clientX ) * 0.1 + onPointerDownLon;
lat = ( event.clientY - onPointerDownPointerY ) * 0.1 + onPointerDownLat;
}
}
function onDocumentMouseUp( event ) {
isUserInteracting = false;
}
2.4 Rendering in the Animation Loop
So far we have only changed global variables without any visible effect. We still need to add the following code to the animation loop so the view is updated every frame.
Convert the longitude and latitude into world coordinates and aim the camera at the result:
lat = Math.max( - 85, Math.min( 85, lat ) ); // clamp latitude to the range -85..85 degrees
phi = THREE.Math.degToRad( 90 - lat );
theta = THREE.Math.degToRad( lon );
camera.target.x = pRadius * Math.sin( phi ) * Math.cos( theta );
camera.target.y = pRadius * Math.cos( phi );
camera.target.z = pRadius * Math.sin( phi ) * Math.sin( theta );
camera.lookAt( camera.target );
2.5 Handling Window Resize
When the window size changes, we adjust the cameras and the renderer accordingly:
window.addEventListener( 'resize', onWindowResize, false );
function onWindowResize() {
camera.aspect = window.innerWidth / window.innerHeight;
camera.projectionMatrix.makePerspective( fov, camera.aspect, 1, 1100 );
camera.updateProjectionMatrix();
cameraOrtho.left = - window.innerWidth / 2;
cameraOrtho.right = window.innerWidth / 2;
cameraOrtho.top = window.innerHeight / 2;
cameraOrtho.bottom = - window.innerHeight / 2;
cameraOrtho.updateProjectionMatrix();
renderer.setSize( window.innerWidth, window.innerHeight );
}
Now we can view the panorama and move the viewpoint; next we add markers to it.
III. Adding Markers
3.1 Creating the Markers
Define a few sample markers and add them to the sprites array:
var p1={lon:158,lat:7};
var p2={lon:120,lat:13};
var p3={lon:-120,lat:10};
sprites.push(createSprite(p1,"textures/school.png","Chemical Laboratory"));
sprites.push(createSprite(p2,"textures/school.png","Physics Laboratory"));
sprites.push(createSprite(p3,"textures/food.png","Restaurant"));
The function that creates a marker:
function createSprite(position,url,name){
var textureLoader = new THREE.TextureLoader();
var ballMaterial = new THREE.SpriteMaterial({
map: textureLoader.load( url )//( 'textures/sprite1.png' )
});
var sp1={
pos:position,
name:name,
sprite:new THREE.Sprite(ballMaterial)
};
sp1.sprite.scale.set(32, 32, 1.0);
sp1.sprite.position.set(0, 0, 0);
sp1.sprite.name=name;
sceneOrtho.add(sp1.sprite);
clickableObjects.push(sp1.sprite);
return sp1;
}
Each marker is stored in two arrays: clickableObjects, used for picking, and sprites, which keeps the longitude and latitude used by the following calculation in the animation loop:
if(typeof (sprites)!="undefined") {
for (var i = 0; i < sprites.length; i++) {
var wp = geoPosition2World(sprites[i].pos.lon,sprites[i].pos.lat);
var sp = worldPostion2Screen(wp, camera);
var test=wp.clone();
test.project(camera);
//Check whether the marker is in the field of view; as the sphere rotates, markers can leave the view and must be hidden.
if (test.x > -1 && test.x < 1 && test.y > -1 && test.y < 1 && test.z > -1 && test.z < 1) {
sprites[i].sprite.scale.set(32, 32,32);
sprites[i].sprite.position.set(sp.x, sp.y, 1);
}
else {
//marker is outside the view: shrink it and park it at the origin so it is effectively hidden
sprites[i].sprite.scale.set(1.0,1.0,1.0);
sprites[i].sprite.position.set(0,0,0);
}
}
}
This is the core logic for positioning the markers. First, each marker's longitude and latitude are converted into world coordinates with geoPosition2World. Projecting that point with test.project(camera) normalizes it to device coordinates, and the point is in view only if its x, y and z all lie between -1 and 1.
Because we can never see the whole sphere at once, only the markers that fall inside the view are placed: their world coordinates are converted into screen coordinates with worldPostion2Screen and the sprite is moved there. Both helper functions are defined in the full source at the end of this post.
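For reference, here is a simplified equivalent of what worldPostion2Screen computes, written as a hypothetical ndcToScreen helper; it is just the algebraic simplification of the expressions in the full source, assuming the sprite scene's origin is the screen center:
// Map a projected point in normalized device coordinates (each component in [-1, 1])
// to pixel offsets from the screen center, where the orthographic sprites live.
function ndcToScreen( ndc ) {
    return new THREE.Vector3(
        ndc.x * window.innerWidth / 2,   // same as (ndc.x + 1) * w/2 - w/2
        ndc.y * window.innerHeight / 2,  // same as h/2 - (-ndc.y + 1) * h/2
        0
    );
}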
3.2 Querying Marker Attributes
Listen for click events and pick markers with the raycaster (created in init() with new THREE.Raycaster(); see the full source):
document.addEventListener( 'click', onDocumentMouseClick, false);
function onDocumentMouseClick(event){
//convert the click position to normalized device coordinates (-1..1)
mouse.x = ( event.clientX / window.innerWidth ) * 2 - 1;
mouse.y = - ( event.clientY / window.innerHeight ) * 2 + 1;
raycaster.setFromCamera( mouse, cameraOrtho );
var intersects = raycaster.intersectObjects( clickableObjects );
intersects.forEach(function(element){
alert("Intersection: " + element.object.name);
});
}
If everything is written correctly, the code above achieves the effect shown in the screenshots at the beginning of the post.
The complete source code follows:
//Three elements, camera, scene, renderer
var camera, scene, renderer;
//Orthographic camera and scene for the markers
var cameraOrtho, sceneOrtho;
//Raycaster used for mouse picking
var raycaster;
//Array of panorama marker points
var sprites = [];
//Array used for picking the marker sprites
var clickableObjects = [];
//Flag: is the user currently interacting (mouse pressed)?
var isUserInteracting = false;
//Longitude and latitude of the screen center
var lon =0,lat=0;
//Longitude and latitude recorded at mouse down
var onPointerDownLon = 0, onPointerDownLat = 0;
//Mouse position recorded at mouse down
var onPointerDownPointerX = 0, onPointerDownPointerY = 0;
//Mouse 2D coordinates
var mouse = new THREE.Vector2();
//Field of view in degrees
var fov = 75;
//Spherical radius
var pRadius = 1000;
//Initialize various parameters
init();
//Start the animation loop
animate();
function init() {
//DOM container element and the panorama sphere mesh
var container, mesh;
//Get the container element
container = document.getElementById( 'container' );
//Initialize the camera (perspective projection) to imitate the way the human eye sees it. It is the most commonly used projection mode for rendering 3D scenes.
camera = new THREE.PerspectiveCamera( fov, window.innerWidth / window.innerHeight, 1, 1100 );
camera.target = new THREE.Vector3( 0, 0, 0 );
//Initialize the orthographic camera: the size of an object in the rendered image is independent of its distance from the camera. Used for 2D scenes, UI elements, etc.
cameraOrtho = new THREE.OrthographicCamera( - window.innerWidth / 2, window.innerWidth / 2, window.innerHeight / 2, - window.innerHeight / 2, 1, 10 );
cameraOrtho.position.z = 10;
//Initialize the scenes (perspective and orthographic)
scene = new THREE.Scene();
sceneOrtho = new THREE.Scene();
//Raycaster for picking elements with the mouse
raycaster = new THREE.Raycaster();
//Create the mesh: combine the sphere geometry and the material, and add it to the scene
//The first parameter of the sphere geometry is the radius, which must lie between the camera's near and far planes; the second and third are the segment counts: too small distorts the panorama, larger values are smoother but cost performance
mesh = new THREE.Mesh( new THREE.SphereGeometry( 2, 60, 40 ),
new THREE.MeshBasicMaterial(
{ map: THREE.ImageUtils.loadTexture( 'textures/pano-1.jpg' ) }
) );
mesh.scale.x = -1; // flip the sphere so the texture faces inward
scene.add( mesh );
//Create the renderer and append its canvas to the container
renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.autoClear = false; // we clear manually because two scenes are rendered each frame
container.appendChild( renderer.domElement );
//Initialize panoramic markers
initLable();
document.addEventListener( 'mousedown', onDocumentMouseDown, false );
document.addEventListener( 'mousemove', onDocumentMouseMove, false );
document.addEventListener( 'mouseup', onDocumentMouseUp, false );
document.addEventListener( 'click', onDocumentMouseClick, false);
window.addEventListener( 'resize', onWindowResize, false );
}
/*
* MouseDown
* */
function onDocumentMouseDown( event ) {
event.preventDefault();
isUserInteracting = true;
onPointerDownPointerX = event.clientX;
onPointerDownPointerY = event.clientY;
onPointerDownLon = lon;
onPointerDownLat = lat;
}
/*
* Mouse movement events
* */
function onDocumentMouseMove( event ) {
if ( isUserInteracting ) {
lon = ( onPointerDownPointerX - event.clientX ) * 0.1 + onPointerDownLon;
lat = ( event.clientY - onPointerDownPointerY ) * 0.1 + onPointerDownLat;
}
}
/*
* Mouse up event
* */
function onDocumentMouseUp( event ) {
isUserInteracting = false;
}
/*
* Mouse click: query the marker points
* */
function onDocumentMouseClick(event){
mouse.x = ( event.clientX/ window.innerWidth ) * 2 - 1;
mouse.y = - ( event.clientY / window.innerHeight ) * 2 + 1;
raycaster.setFromCamera( mouse, cameraOrtho );
var intersects = raycaster.intersectObjects( clickableObjects );
intersects.forEach(function(element){
alert("Intersection: " + element.object.name);
});
}
/*
* Window resize event
* */
function onWindowResize() {
camera.aspect = window.innerWidth / window.innerHeight;
camera.projectionMatrix.makePerspective( fov, camera.aspect, 1, 1100 );
camera.updateProjectionMatrix();
cameraOrtho.left = - window.innerWidth / 2;
cameraOrtho.right = window.innerWidth / 2;
cameraOrtho.top = window.innerHeight / 2;
cameraOrtho.bottom = - window.innerHeight / 2;
cameraOrtho.updateProjectionMatrix();
renderer.setSize( window.innerWidth, window.innerHeight );
}
//Initialize the panorama markers
function initLable() {
var p1={lon:158,lat:7};
var p2={lon:120,lat:13};
var p3={lon:-120,lat:10};
var p4={lon:-180,lat:2};
var p5={lon:-150,lat:5};
var p6={lon:-20,lat:20};
var p7={lon:50,lat:10};
sprites.push(createSprite(p1,"textures/school.png","Chemical Laboratory"));
sprites.push(createSprite(p2,"textures/school.png","Physics Laboratory"));
sprites.push(createSprite(p3,"textures/food.png","Restaurant"));
sprites.push(createSprite(p4,"textures/home.png","Gym"));
sprites.push(createSprite(p5,"textures/home.png","apartment"));
sprites.push(createSprite(p6,"textures/shop.png","Library"));
sprites.push(createSprite(p7,"textures/hospital.png","Clinic"));
}
/*
* Create annotations
* position: location of the marker (lon, lat)
* */
function createSprite(position,url,name){
var textureLoader = new THREE.TextureLoader();
var ballMaterial = new THREE.SpriteMaterial({
map: textureLoader.load( url )//( 'textures/sprite1.png' )
});
var sp1={
pos:position,
name:name,
sprite:new THREE.Sprite(ballMaterial)
};
sp1.sprite.scale.set(32, 32, 1.0);
sp1.sprite.position.set(0, 0, 0);
sp1.sprite.name=name;
sceneOrtho.add(sp1.sprite);
clickableObjects.push(sp1.sprite);
return sp1;
}
function animate() {
requestAnimationFrame( animate );
render();
}
function render() {
/*
* 1. POI points are positioned (lon, lat) with an editing tool.
* 2. The panorama is divided into four parts by longitude, 90 degrees each (0-90, 90-180, 180-270, 270-360).
* 3. In the render loop, the POIs that need to be drawn are found from the current camera's longitude range
*    (i.e. longitude and latitude are computed from screen coordinates).
*    Let lonN be the longitude at the center of the current view and num half of its angular width;
*    the visible longitude range is then lonN - num to lonN + num.
* 4. Each visible POI's longitude and latitude are converted to screen coordinates and the POI is drawn.
* */
lat = Math.max( - 85, Math.min( 85, lat ) ); // clamp latitude to the range -85..85 degrees
phi = THREE.Math.degToRad( 90 - lat );
theta = THREE.Math.degToRad( lon );
camera.target.x = pRadius * Math.sin( phi ) * Math.cos( theta );
camera.target.y = pRadius * Math.cos( phi );
camera.target.z = pRadius * Math.sin( phi ) * Math.sin( theta );
camera.lookAt( camera.target );
if(typeof (sprites)!="undefined") {
for (var i = 0; i < sprites.length; i++) {
var wp = geoPosition2World(sprites[i].pos.lon,sprites[i].pos.lat);
var sp = worldPostion2Screen(wp, camera);
var test=wp.clone();
test.project(camera);
//Check whether the marker is in the field of view; as the sphere rotates, markers can leave the view and must be hidden.
if (test.x > -1 && test.x < 1 && test.y > -1 && test.y < 1 && test.z > -1 && test.z < 1) {
sprites[i].sprite.scale.set(32, 32,32);
sprites[i].sprite.position.set(sp.x, sp.y, 1);
}
else {
//marker is outside the view: shrink it and park it at the origin so it is effectively hidden
sprites[i].sprite.scale.set(1.0,1.0,1.0);
sprites[i].sprite.position.set(0,0,0);
}
}
}
renderer.clear();
renderer.render( scene, camera );
renderer.clearDepth();
renderer.render( sceneOrtho, cameraOrtho );
}
/*
* Conversion of longitude and latitude coordinates to world coordinates
* lon: longitude
* lat: latitude
* */
function geoPosition2World(lon,lat){
lat = Math.max( - 85, Math.min( 85, lat ) );
var phi = THREE.Math.degToRad( 90 - lat);
var theta = THREE.Math.degToRad( lon );
var result={
x:pRadius * Math.sin( phi ) * Math.cos( theta ),
y:pRadius * Math.cos( phi ),
z:pRadius * Math.sin( phi ) * Math.sin( theta )
}
return new THREE.Vector3(result.x,result.y,result.z);
}
/*
* Conversion of world coordinates to screen coordinates
* */
function worldPostion2Screen(world_vector,camera){
var vector=world_vector.clone();
vector.project(camera);
var result = {
//The sprite scene uses the center of the screen as its origin
x: Math.round((vector.x + 1 ) * window.innerWidth / 2 -window.innerWidth / 2),
y: Math.round(window.innerHeight / 2-(-vector.y + 1 ) * window.innerHeight / 2 ),
//Alternative: origin at the top-left corner of the screen
//x: Math.round((vector.x + 1 ) * window.innerWidth / 2),
// y: Math.round(-(-vector.y + 1 ) * window.innerHeight / 2 ),
z:0
};
return new THREE.Vector3(result.x,result.y,result.z);
}