# 2021 ICPC Online Contest, Round 2

Keywords: ICPC

General idea: given an n×n grid where every cell initially holds m units of water, a cell's water flows in equal shares to the adjacent (edge-sharing) cells whose height is strictly smaller; water comes to rest only in cells of height 0. Compute the final amount of water in every cell. n <= 500.

Idea: the highest cell only ever gives water away and never receives any, so sort the cells by height in descending order and simulate directly.
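As a quick sanity check of this order argument, here is a self-contained re-implementation on a made-up 2×2 instance (the heights and m below are mine, not from the judge): processing cells from highest to lowest conserves the total amount of water, and here all 16 units end up in the single height-0 cell.

```cpp
#include <algorithm>
#include <tuple>
#include <utility>
#include <vector>

// Simulate the flow on a small grid: cells are processed from highest to lowest,
// each splitting its water evenly among its strictly lower edge-neighbours.
std::vector<std::vector<double>> flow(std::vector<std::vector<int>> H, double m)
{
    int n = H.size();
    std::vector<std::vector<double>> w(n, std::vector<double>(n, m));
    std::vector<std::tuple<int, int, int>> cells; // (-height, x, y)
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) cells.emplace_back(-H[i][j], i, j);
    std::sort(cells.begin(), cells.end()); // highest first
    int dx[] = {-1, 0, 1, 0}, dy[] = {0, 1, 0, -1};
    for (auto [nh, x, y] : cells)
    {
        std::vector<std::pair<int, int>> lower;
        for (int d = 0; d < 4; d++)
        {
            int a = x + dx[d], b = y + dy[d];
            if (a >= 0 && a < n && b >= 0 && b < n && H[a][b] < -nh)
                lower.push_back({a, b});
        }
        if (lower.empty()) continue;
        double fl = w[x][y] / lower.size();
        for (auto [a, b] : lower) w[a][b] += fl;
        w[x][y] = 0;
    }
    return w;
}
```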

The code is as follows:

```cpp
#include <bits/stdc++.h>
using namespace std;
const int N = 510;
typedef pair<int, int> PII;
int n, m, k;
struct node
{
    int h, x, y;
    bool operator<(const node &W) const
    {
        return h > W.h; // sort cells by height, highest first
    }
} q[N * N];
int H[N][N];
double ans[N][N];
int dx[] = {-1, 0, 1, 0}, dy[] = {0, 1, 0, -1};
int main()
{
    scanf("%d%d", &n, &m);
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++)
        {
            scanf("%d", &H[i][j]);
            q[k++] = {H[i][j], i, j};
            ans[i][j] = m;
        }
    sort(q, q + k);
    for (int i = 0; i < k; i++) // process cells from highest to lowest
    {
        int h = q[i].h, x = q[i].x, y = q[i].y;
        int cnt = 0;
        vector<PII> tmp;
        for (int j = 0; j < 4; j++)
        {
            int a = x + dx[j], b = y + dy[j];
            if (a < 1 || a > n || b < 1 || b > n) continue;
            if (h > H[a][b]) cnt++, tmp.push_back({a, b});
        }
        if (cnt) // split this cell's water evenly among strictly lower neighbours
        {
            double fl = ans[x][y] / cnt;
            for (auto t : tmp)
            {
                int a = t.first, b = t.second;
                ans[x][y] -= fl;
                ans[a][b] += fl;
            }
        }
    }
    for (int i = 1; i <= n; i++)
    {
        for (int j = 1; j <= n; j++)
        {
            if (H[i][j]) printf("0 "); // water settles only on height-0 cells
            else printf("%.6f ", ans[i][j]);
        }
        puts("");
    }
    return 0;
}
```


General idea: given n, the number of binary bits, a sign array sgn[] and digit arrays a[] and b[] (one digit per bit), a number is represented as $\sum_{i=0}^{n-1} sgn[i]\,a[i]\cdot 2^i$.

Compute a + b and output the result in the same representation. 30 <= n <= 60.

Idea: the problem guarantees a solution exists, so direct simulation works; there is no need to worry about infeasible inputs. The confusing part is the sign array sgn. Without signs, this is ordinary column addition with carries. With signs it is still column addition, except that on a negative bit a carry acts as a borrow. Simulate accordingly.
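A handy way to sanity-check the carry/borrow simulation (on made-up digits, since the judge's data is not shown) is to evaluate $\sum_i sgn[i]\,d[i]\cdot 2^i$ for the inputs and for the produced digits, and compare:

```cpp
#include <vector>
typedef long long LL;

// Value of a digit string under the signed base: sum of sgn[i] * d[i] * 2^i.
LL value(const std::vector<int>& sgn, const std::vector<int>& d)
{
    LL v = 0;
    for (int i = (int)d.size() - 1; i >= 0; i--) v = v * 2 + (LL)sgn[i] * d[i];
    return v;
}

// The column-by-column addition from the solution below: an ordinary carry,
// except that on a negative bit an incoming carry acts as a borrow.
std::vector<int> addDigits(const std::vector<int>& sgn,
                           const std::vector<int>& a, const std::vector<int>& b)
{
    int n = sgn.size(), f = 0;
    std::vector<int> c(n);
    for (int i = 0; i < n; i++)
    {
        int t = a[i] + b[i] + f * sgn[i];
        if (t == -1) { c[i] = 1; f = -1; } // borrow from the next bit
        else { c[i] = t % 2; f = t / 2; }
        f *= sgn[i];
    }
    return c;
}
```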

The code is as follows:

```cpp
#include <bits/stdc++.h>
using namespace std;
const int N = 65;
typedef long long LL;
int sgn[N], a[N], b[N], c[N];
int n;
int main()
{
    ios::sync_with_stdio(false);
    cin.tie(0);
    cin >> n;
    for (int i = 0; i < n; i++) cin >> sgn[i];
    for (int i = 0; i < n; i++) cin >> a[i];
    for (int i = 0; i < n; i++) cin >> b[i];
    int f = 0; // carry (may be negative, i.e. a borrow)
    for (int i = 0; i < n; i++)
    {
        int t = a[i] + b[i] + f * sgn[i]; // on a negative bit, an incoming carry acts as a subtraction
        if (t == -1) c[i] = 1, f = -1;    // borrow from the next bit
        else c[i] = t % 2, f = t / 2;
        f *= sgn[i]; // carrying 1 out of a negative bit means subtracting 1 more, i.e. borrowing
    }
    for (int i = 0; i < n; i++)
    {
        cout << c[i];
        if (i != n - 1) cout << " ";
    }
    return 0;
}
```


Conclusion: many problems share the same essence; compare the unfamiliar with the familiar.

1. Sort (thinking, technique)

General idea: given a sequence a of length n and an integer k, one operation splits the current sequence, in order, into at most k segments and recombines the segments in some order. Determine whether a can be made monotonically non-decreasing by such operations. If it is impossible, output -2; if it is possible but the minimum number of operations exceeds 3n, output -1; otherwise output the number of operations and, for each one, the split points and the order in which the segments are recombined. (n <= 30000)

Idea: since each operation may create at most k segments, consider cases on k.

When k = 1 no rearrangement is possible, so if a is not already sorted there is no solution.

When k = 2, each operation splits the sequence into two segments and swaps them, which is exactly a cyclic rotation. A rotation can sort the sequence only when a consists of two sorted runs with a[n] <= a[1], i.e. something like "567123"; in that case one operation suffices.
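That k = 2 test can be isolated into a small predicate (the helper name is my own): the sequence must consist of two non-decreasing runs with the last element no larger than the first.

```cpp
#include <vector>

// True iff a single prefix/suffix swap (i.e. one rotation) can sort the
// sequence, which happens exactly when it looks like "567123".
bool sortableByOneRotation(const std::vector<int>& a)
{
    int n = a.size(), i = 0;
    while (i + 1 < n && a[i] <= a[i + 1]) i++; // first non-decreasing run
    i++;
    while (i + 1 < n && a[i] <= a[i + 1]) i++; // second non-decreasing run
    return i + 1 >= n && a[n - 1] <= a[0];
}
```

(As in the contest code, this is only called on sequences that are not already sorted.)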

When k >= 3, at most 3n operations always suffice. Proof sketch: with three segments we can take the already-sorted part as one segment, the smallest remaining element as the start of another, and the rest as the third; repeating this at most n times makes a monotonically non-decreasing, and n <= 3n. The cases k = 1 and k = 2 are easy to implement; the difficulty with k >= 3 is that simulating the moves naively is far too slow. Instead, group the indices of equal values together and process the groups from the largest value down, moving each element to the front. To print an operation we need each element's current position, so we maintain a Fenwick tree (binary indexed tree) over original indices recording which elements have already been moved to the front: the current position of the element originally at index v is v plus the number of moved elements that were originally behind it.
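The position bookkeeping is worth sketching on its own (a minimal Fenwick-tree sketch with names of my choosing): after some elements have been moved to the front, the element originally at index v now sits at v plus the number of moved elements that started behind it.

```cpp
#include <vector>

struct Fenwick
{
    std::vector<int> t;
    Fenwick(int n) : t(n + 1, 0) {}
    void add(int x) // mark original index x as moved to the front
    {
        for (; x < (int)t.size(); x += x & -x) t[x]++;
    }
    int prefix(int x) const // marked indices <= x
    {
        int s = 0;
        for (; x; x -= x & -x) s += t[x];
        return s;
    }
    int behind(int x) const // marked indices > x
    {
        return prefix((int)t.size() - 1) - prefix(x);
    }
};

// Current position of the element that was originally at index v (1-based).
int currentPos(const Fenwick& fw, int v) { return v + fw.behind(v); }
```

For example, starting from {3, 1, 2} and moving the element at original index 3 to the front gives {2, 3, 1}, so the elements at original indices 1 and 2 have shifted back by one.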

The code is as follows:

```cpp
#include <bits/stdc++.h>
using namespace std;
const int N = 30010;
int n, m, a[N], b[N], tot, _;
int tr[N]; // Fenwick tree marking the original indices already moved to the front
vector<int> g[N], ans;
void add(int x)
{
    for (; x <= n; x += x & -x) tr[x]++;
}
int sum(int x) // number of marked indices <= x
{
    int s = 0;
    for (; x; x -= x & -x) s += tr[x];
    return s;
}
int ask(int x) // number of marked indices in (x, n]
{
    return sum(n) - sum(x);
}
bool check()
{
    for (int i = 1; i < n; i++)
        if (a[i] > a[i + 1]) return 0;
    return 1;
}
void solve()
{
    cin >> n >> m;
    for (int i = 1; i <= n; i++) cin >> a[i], b[i] = a[i];
    if (check())
    {
        cout << '0';
        if (_) cout << "\n";
        return;
    }
    if (m == 1) cout << -2;
    else if (m == 2)
    {
        // one swap of prefix and suffix sorts a only if it looks like "567123"
        int pre = 1;
        while (pre < n && a[pre] <= a[pre + 1]) pre++;
        int suf = pre + 1;
        while (suf < n && a[suf] <= a[suf + 1]) suf++;
        if (suf == n && a[n] <= a[1])
        {
            cout << "1\n" << "2\n";
            cout << 0 << " " << pre << " " << n << "\n";
            cout << "2 1";
        }
        else cout << -2;
    }
    else
    {
        for (int i = 0; i <= n; i++) tr[i] = 0, g[i].clear();
        ans.clear();
        sort(b + 1, b + 1 + n);
        tot = unique(b + 1, b + 1 + n) - b - 1;
        for (int i = 1; i <= n; i++)
        {
            int pos = lower_bound(b + 1, b + 1 + tot, a[i]) - b;
            g[pos].push_back(i);
        }
        for (int i = tot; i; i--) // move elements to the front, largest value first
        {
            for (auto v : g[i])
            {
                // the current position of the element originally at v is v plus
                // the number of moved elements that started behind it
                int pos = v + ask(v);
                if (pos != 1) ans.push_back(pos);
                add(v);
            }
        }
        cout << ans.size() << "\n";
        int sz = ans.size();
        for (auto v : ans)
        {
            sz--;
            if (v == n) // the element is last: two segments suffice
            {
                cout << "2\n" << 0 << " " << v - 1 << " " << v << "\n";
                cout << "2 1";
            }
            else // three segments: prefix, the element itself, suffix
            {
                cout << "3\n" << 0 << " " << v - 1 << " " << v << " " << n << "\n";
                cout << "2 1 3";
            }
            if (sz) cout << "\n";
        }
    }
    if (_) cout << "\n";
}
int main()
{
    ios::sync_with_stdio(false);
    cin.tie(0);
    cin >> _;
    while (_--) solve();
    return 0;
}
```


Conclusion: when thinking about a problem, don't try to swallow it whole. Split the big problem into several small ones and solve them step by step.

1. Leapfrog (greedy, dp, thinking, problem decomposition)

General idea: there are n frogs; frog i has an initial score $a_i$ and carries pairs $(b_i, t_i)$, exactly one at the start. Each round you pick one frog to die and process all of its pairs in order: a pair $(b_i, t_i)$ triggers $t_i$ operations, and each operation picks a surviving frog, adds $b_i$ to its score, and appends a copy of $(b_i, t_i)$ to that frog's list. Maximize the final score of the last surviving frog. ($1 \le n \le 10^5$, $1 \le a_i, b_i \le 10^4$, $t_i = 2$ or $3$)

Idea: reading the statement carefully, we find that when a frog dies, where its pairs go does not matter: sending them all to one frog gives the same final total as spreading them around, so only the order of deaths matters, and this smells like a greedy ordering problem. If a frog (other than the final survivor) dies in round p, each pair it originates contributes $b_i \cdot t_i^{n-p}$ to the answer, while the survivor contributes only its $a_i$. Intuition says frogs with $t_i = 3$ should die first, but differing $b_i$ can upset that. Here the bound $b_i \le 10^4$ helps: once the exponent is large enough, the base dominates the coefficient. Concretely, $3^p$ exceeds $10^4 \cdot 2^p$ as soon as $(3/2)^p > 10^4$, i.e. $p \ge 23$, so whenever $n - p \ge 23$ (that is, for the first n - 23 deaths) any t = 3 pair beats any t = 2 pair. Sort with t as the first key (descending), b as the second (descending), and a as the third (ascending, so larger a stays alive longer), and take the first n - 23 greedily.
For the at most 23 remaining frogs, $a_i$, $b_i$ and $t_i$ all interact, no simple ordering is optimal, and brute force over orderings is still too slow, so we finish with dp: split them by $t_i$ into two groups, each sorted by $b_i$ descending (within a group, deaths clearly happen in that order). Define f[i][j][0/1]: i frogs of the tail are dead, j of them from the t = 2 group, and the flag records whether the final survivor has already been designated. Since one state branches into several successors, it is easier to push from i to i + 1 than to pull. The dp touches only about $23^2$ states, so the sorting dominates the running time.
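The cutoff of 23 can be checked numerically: $3^p > 10^4 \cdot 2^p$ is equivalent to $(3/2)^p > 10^4$, and $\log_{1.5} 10^4 \approx 22.7$, so p = 23 is the first exponent at which a t = 3 pair beats any t = 2 pair regardless of b.

```cpp
typedef long long LL;

// Plain integer power; exponents here stay small enough for long long.
LL power(LL base, int e)
{
    LL r = 1;
    while (e--) r *= base;
    return r;
}
```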

The code is as follows:

```cpp
#include <bits/stdc++.h>
using namespace std;
const int N = 100010, M = 998244353;
typedef long long LL;
LL qmi(LL a, LL b) // power WITHOUT modulo: the dp compares real values, and reducing
{                  // early would make a number just above the modulus look tiny
    LL res = 1;
    while (b)
    {
        if (b & 1) res = res * a;
        a = a * a;
        b >>= 1;
    }
    return res;
}
struct node
{
    LL a, b, t;
} v[N];
int n;
LL f[50][50][2], fac[2][N];
node w[2][50];
bool cmp1(node &A, node &B)
{
    if (A.t != B.t) return A.t > B.t;      // t = 3 first
    else if (A.b != B.b) return A.b > B.b; // larger b first
    return A.a < B.a;                      // keep larger a alive longer
}
bool cmp2(node &A, node &B)
{
    return A.b > B.b;
}
int main()
{
    ios::sync_with_stdio(false);
    cin.tie(0);
    for (int i = 0; i <= 1; i++) // fac[0][j] = 2^j mod M, fac[1][j] = 3^j mod M
    {
        fac[i][0] = 1;
        for (int j = 1; j <= 100000; j++) fac[i][j] = fac[i][j - 1] * (i + 2) % M;
    }
    int _;
    cin >> _;
    while (_--)
    {
        cin >> n;
        for (int i = 1; i <= n; i++)
        {
            LL a, b, t;
            cin >> a >> b >> t;
            v[i] = {a, b, t};
        }
        LL ans = 0;
        sort(v + 1, v + 1 + n, cmp1);
        for (int i = 1; i <= n - 23; i++) // greedy part: exponent >= 23, so t decides
        {
            LL b = v[i].b, t = v[i].t;
            if (t == 2) ans = (ans + b * fac[0][n - i]) % M;
            else ans = (ans + b * fac[1][n - i]) % M;
        }
        int p = max(0, n - 23) + 1;
        sort(v + p, v + 1 + n, cmp2);
        memset(f, 0, sizeof f);
        int cnt1 = 0, cnt2 = 0, k = 0;
        for (int i = p; i <= n; i++) // split the tail into the t = 2 and t = 3 groups
        {
            if (v[i].t == 2) w[0][++cnt1] = v[i];
            else w[1][++cnt2] = v[i];
        }
        k = cnt1 + cnt2;
        // f[i][j][c]: i frogs dead, j of them from the t = 2 group,
        // c = 1 iff the final survivor has already been designated
        if (cnt1)
        {
            f[1][1][0] = w[0][1].b * qmi(2, k - 1);
            f[1][1][1] = w[0][1].a; // this frog survives: only its a counts
        }
        if (cnt2)
        {
            f[1][0][0] = w[1][1].b * qmi(3, k - 1);
            f[1][0][1] = w[1][1].a;
        }
        for (int i = 1; i <= k - 1; i++)
        {
            // once c = 1 one frog is reserved as survivor, shifting the exponent by one
            for (int c = 0; c <= 1; c++)
            {
                for (int j = 0; j <= min(i, cnt1); j++)
                {
                    if (j + 1 <= cnt1) f[i + 1][j + 1][c] = max(f[i + 1][j + 1][c], f[i][j][c] + w[0][j + 1].b * qmi(2, k - i - 1 + c));
                    if (i + 1 - j <= cnt2) f[i + 1][j][c] = max(f[i + 1][j][c], f[i][j][c] + w[1][i + 1 - j].b * qmi(3, k - i - 1 + c));
                    if (c == 0) // designate the next frog as the final survivor
                    {
                        if (j + 1 <= cnt1) f[i + 1][j + 1][1] = max(f[i + 1][j + 1][1], f[i][j][0] + w[0][j + 1].a);
                        if (i + 1 - j <= cnt2) f[i + 1][j][1] = max(f[i + 1][j][1], f[i][j][0] + w[1][i + 1 - j].a);
                    }
                }
            }
        }
        cout << (ans + f[k][cnt1][1]) % M;
        if (_) cout << "\n";
    }
    return 0;
}
```


Summary: when deriving transition equations, if it is hard to express how a state is reached (pulling), enumerate the states it can reach instead (pushing from already-computed states).

1. Limit

A math problem: expand with Taylor's formula (which, I admit, I couldn't), or evaluate the limit with L'Hôpital's rule. If you know the formula, it's a free sign-in problem.

1. Meal (bitmask dp)

General idea: there are n people and n dishes, and person i has a preference $a_{i,j}$ for dish j. The n people pick dishes in turn, each choosing among the dishes that remain; person i picks dish j with probability $\frac{a_{i,j}}{\sum_k a_{i,k}}$, the sum running over the remaining dishes. n <= 20. For every person and every dish, compute the probability that that person eats that dish.

Idea: n is at most 20, so consider state compression: bit j of the state is 1 if dish j has been taken. Since people pick in turn, a state with cnt set bits corresponds to person cnt having just picked, so every situation is covered exactly once with no repetition or omission. Let f[i] be the probability of reaching state i; when bit j of state i is 1, person cnt may have just picked dish j, giving the transitions:

ans[cnt][j] += f[i ^ (1 << j)] * P(person cnt picks dish j)

f[i] += f[i ^ (1 << j)] * P(person cnt picks dish j)

The code is as follows:

```cpp
#include <bits/stdc++.h>
using namespace std;
const int N = 22, M = 998244353;
typedef long long LL;
int n, m;
int f[1 << N], a[N][N], s[N], inv[N * 100];
int ans[N][N];
int qmi(int a, int b)
{
    int res = 1;
    while (b)
    {
        if (b & 1) res = (LL)res * a % M;
        a = (LL)a * a % M;
        b >>= 1;
    }
    return res % M;
}
void solve()
{
    for (int i = 1; i <= 2000; i++) inv[i] = qmi(i, M - 2); // modular inverses via Fermat
    cin >> n;
    for (int i = 1; i <= n; i++)
        for (int j = 0; j < n; j++)
        {
            cin >> a[i][j];
            s[i] += a[i][j];
        }
    f[0] = 1;
    for (int i = 1; i < (1 << n); i++)
    {
        int cnt = 0, tot = 0;
        for (int j = 0; j < n; j++) if (i >> j & 1) cnt++; // cnt set bits: person cnt picked last
        for (int j = 0; j < n; j++) if (i >> j & 1) tot += a[cnt][j];
        for (int j = 0; j < n; j++)
        {
            if (i >> j & 1)
            {
                // probability that person cnt picks dish j when the dishes in
                // i \ {j} are gone: a[cnt][j] / (s[cnt] - (tot - a[cnt][j]))
                int p = (LL)a[cnt][j] * inv[s[cnt] - tot + a[cnt][j]] % M;
                ans[cnt][j] = (ans[cnt][j] + (LL)f[i ^ (1 << j)] * p % M) % M;
                f[i] = (f[i] + (LL)f[i ^ (1 << j)] * p % M) % M;
            }
        }
    }
    for (int i = 1; i <= n; i++)
    {
        for (int j = 0; j < n; j++)
        {
            if (j) cout << " ";
            cout << ans[i][j];
        }
        if (i != n) cout << "\n";
    }
}
int main()
{
    ios::sync_with_stdio(false);
    cin.tie(0);
    int _ = 1;
    while (_--) solve();
    return 0;
}
```


Summary: when the data range is tiny, especially in "take it or not" problems, think of bitmask dp (state compression).

1. Euler Function (potential-energy segment tree, thinking)

General idea: given a sequence x of length n with $1 \le x_i \le 100$, perform m operations of two kinds: multiply every number in an interval by w (w <= 100), or query the sum of the Euler function values of all numbers in an interval (taken modulo 998244353 in the code below). n, m <= 1e5.

Idea: the interval operations immediately suggest a segment tree, but how do we maintain φ under interval multiplication?

First, recall how the Euler function behaves when multiplying by a prime p:

```cpp
// p prime:
if (x % p == 0) phi(x * p) = phi(x) * p;
else            phi(x * p) = phi(x) * (p - 1);
```


Therefore we can decompose w into prime factors and apply each prime separately. If every number in a node's range already contains the prime p, φ of each of them is simply multiplied by p, so the node's sum can be updated with a lazy multiplier. But what about numbers with x % p != 0? Note that w <= 100, so there are only 25 primes below 100, and interval updates are permanent: once a number gains the factor p it contains p forever. So each position needs a single-point (leaf) update at most 25 times in total, and even updating those points one by one keeps the complexity acceptable; all later updates for that prime hit the lazy case. This is the core idea of a potential-energy segment tree: when a range update cannot be lazily tagged, recurse with a flag that stops the recursion early, and make sure each node can only be forcibly updated a bounded number of times. A bitset per node records which primes divide every number in its range; the range flag is just the AND of the children's bitsets.
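The two update rules themselves are easy to verify against a trial-division φ (a standalone sketch):

```cpp
// Euler's totient by trial division.
int phi(int x)
{
    int res = x;
    for (int i = 2; i * i <= x; i++)
        if (x % i == 0)
        {
            res = res / i * (i - 1);
            while (x % i == 0) x /= i;
        }
    if (x > 1) res = res / x * (x - 1);
    return res;
}
```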

The code is as follows:

```cpp
#include <bits/stdc++.h>
using namespace std;
const int N = 100010, M = 998244353;
typedef long long LL;
int n, m;
int a[N], cnt[105][30], pri[30], idx, f[105];
bitset<30> st[105]; // st[x][j] = 1 iff the j-th prime divides x
struct node
{
    int l, r;
    LL x, lz;      // x: sum of phi over the range; lz: lazy multiplier
    bitset<30> tg; // tg[j] = 1 iff the j-th prime divides every number in the range
} tr[N * 4];
int phi(int x)
{
    int ans = x;
    for (int i = 2; i * i <= x; i++)
    {
        if (x % i == 0)
        {
            ans = ans / i * (i - 1);
            while (x % i == 0) x /= i;
        }
    }
    if (x > 1) ans = ans / x * (x - 1);
    return ans;
}
void init()
{
    for (int i = 2; i <= 100; i++) // sieve the 25 primes below 100
    {
        bool fg = 1;
        for (int j = 2; j <= i / j; j++)
            if (i % j == 0)
            {
                fg = 0;
                break;
            }
        if (fg) pri[++idx] = i;
    }
    for (int i = 1; i <= 100; i++) // cnt[i][j]: exponent of the j-th prime in i
    {
        int tmp = i;
        for (int j = 1; j <= 25; j++)
        {
            while (tmp % pri[j] == 0)
            {
                cnt[i][j]++;
                tmp /= pri[j];
            }
            if (cnt[i][j]) st[i][j] = 1;
        }
    }
    f[1] = 1;
    for (int i = 2; i <= 100; i++) f[i] = phi(i);
}
void pushup(int u)
{
    tr[u].x = (tr[u << 1].x + tr[u << 1 | 1].x) % M;
    tr[u].tg = tr[u << 1].tg & tr[u << 1 | 1].tg;
}
void pushdown(int u)
{
    if (tr[u].lz > 1)
    {
        tr[u << 1].lz = tr[u << 1].lz * tr[u].lz % M;
        tr[u << 1 | 1].lz = tr[u << 1 | 1].lz * tr[u].lz % M;
        tr[u << 1].x = tr[u << 1].x * tr[u].lz % M;
        tr[u << 1 | 1].x = tr[u << 1 | 1].x * tr[u].lz % M;
        tr[u].lz = 1;
    }
}
void build(int u, int l, int r)
{
    tr[u].l = l;
    tr[u].r = r;
    tr[u].lz = 1;
    if (l == r)
    {
        tr[u].x = f[a[l]];
        tr[u].tg = st[a[l]];
        return;
    }
    int mid = l + r >> 1;
    build(u << 1, l, mid);
    build(u << 1 | 1, mid + 1, r);
    pushup(u);
}
LL query(int u, int l, int r)
{
    if (l <= tr[u].l && r >= tr[u].r) return tr[u].x;
    pushdown(u);
    LL ans = 0;
    int mid = tr[u].l + tr[u].r >> 1;
    if (l <= mid) ans = (ans + query(u << 1, l, r)) % M;
    if (r > mid) ans = (ans + query(u << 1 | 1, l, r)) % M;
    return ans;
}
void update(int u, int l, int r, int id, int c) // multiply the range by pri[id]^c
{
    if (l <= tr[u].l && r >= tr[u].r && tr[u].tg[id])
    {
        // every number here already contains pri[id]: phi is simply multiplied
        for (int i = 1; i <= c; i++)
        {
            tr[u].x = tr[u].x * pri[id] % M;
            tr[u].lz = tr[u].lz * pri[id] % M;
        }
        return;
    }
    if (tr[u].l == tr[u].r)
    {
        // leaf without the factor yet: the first multiplication uses pri[id] - 1
        tr[u].x = tr[u].x * (pri[id] - 1) % M;
        tr[u].tg[id] = 1;
        for (int i = 1; i <= c - 1; i++)
        {
            tr[u].x = tr[u].x * pri[id] % M;
            tr[u].lz = tr[u].lz * pri[id] % M;
        }
        return;
    }
    pushdown(u);
    int mid = tr[u].l + tr[u].r >> 1;
    if (l <= mid) update(u << 1, l, r, id, c);
    if (r > mid) update(u << 1 | 1, l, r, id, c);
    pushup(u);
}
void solve()
{
    init();
    cin >> n >> m;
    for (int i = 1; i <= n; i++) cin >> a[i];
    build(1, 1, n);
    while (m--)
    {
        int op, x, y, w;
        cin >> op >> x >> y;
        if (op == 1) cout << query(1, x, y) << "\n";
        else
        {
            cin >> w;
            for (int i = 1; i <= 25; i++)
                if (cnt[w][i]) update(1, x, y, i, cnt[w][i]);
        }
    }
}
int main()
{
    ios::sync_with_stdio(false);
    cin.tie(0);
    int _ = 1;
    while (_--) solve();
    return 0;
}
```


Conclusion: a problem's data range is not only for checking whether an algorithm is feasible; sometimes it is the breakthrough itself.

Posted by wwwapu on Wed, 20 Oct 2021 14:01:25 -0700