#!/usr/bin/perl

	# dups: simple script for showing duplicate files

=head1 NAME

dups - Show Duplicate Files

=head1 SYNOPSIS

 Usage: dups files ...

 dups is a fast script for discovering duplicate files.  It
 achieves its efficiency by comparing file digests rather than the
 file contents themselves, which are generally much larger.

 The NIST Secure Hash Algorithm (SHA) is highly collision-resistant,
 meaning that two files with the same SHA digest are almost
 certainly identical.

 The dups script works by computing the SHA-1 digest of each file
 and looking for matches.  The search can reveal more than one set
 of duplicates, so the output is written as follows:

 match1_file1
	match1_file2
	match1_file3
	etc.

 match2_file1
	match2_file2
	etc.
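
 For example, a hypothetical run over a photo directory (the file
 and directory names below are invented for illustration) might
 produce output like this:

 $ dups *.jpg backups/*.jpg

 holiday_001.jpg
	backups/holiday_001.jpg

 holiday_107.jpg
	holiday_107_copy.jpg
	backups/holiday_107.jpg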

=head1 AUTHOR

Mark Shelor <mshelor@cpan.org>

=head1 SEE ALSO

Perl module L<Digest::SHA> or L<Digest::SHA::PurePerl>

=cut

use strict;
use Digest::SHA;

die "usage: dups files ...\n" unless @ARGV;

my @files = grep { -f $_ } @ARGV;

# Group file names by digest: Digest::SHA->new with no argument
# defaults to SHA-1, and the "b" flag reads each file in binary mode.
my %dups;
for (@files) {
	my $digest = Digest::SHA->new->addfile($_, "b")->hexdigest;
	push(@{$dups{$digest}}, $_);
}

# Print each group of two or more files sharing a digest,
# one file per line, with a blank line between groups.
for (keys %dups) {
	my $ref = $dups{$_};
	if (scalar(@$ref) > 1) {
		print join("\n\t", @$ref), "\n\n";
	}
}
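
# Variant (an assumption, not part of the original example): to group files
# by SHA-256 instead of SHA-1, pass the algorithm to the constructor, e.g.
#
#	my $digest = Digest::SHA->new(256)->addfile($_, "b")->hexdigest;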
